Search Results: "grant"

28 October 2022

Shirish Agarwal: Shantaram, The Pyramid, Japan's Hikikomori & Backpack

Shantaram I know I have been quite behind in reviewing books, but then that's life. First up is not so much a shocker as a pleasant surprise. A bit of background before I share the news. If you have been living under a rock: about 10-12 years ago a book called Shantaram was released. While the book is said to have been released in 2003/4, I got it in my hands around 2008/09 or somewhere around there. The book is like a good meal, a buffet. To share the synopsis: Lin, a 20-something Australian guy, gets involved with a girl who encourages him to try heroin, and he becomes a heroin user. Hard drugs need constant replenishment; it is a chemical thing. So, to fund those cravings, he starts to steal, eventually robbing a bank, and while getting away he shoots a cop, who dies. Whether he surrenders or is caught is unclear, but he is tortured in jail. One day he escapes from prison, lands up at the home of somebody who owes him a favor, gets some money and a fake passport, and lands up in Mumbai/Bombay, as it was then known. This is where the actual story starts: how a six-foot-something Australian guy, relying on his street smarts and know-how, transforms from Lin into Shantaram. What I have shared is perhaps just 5% of the synopsis; as I said, the real story starts here. Now the good news: last week the first four episodes of Shantaram were screened by Apple TV. Interestingly, I have seen quite a number of people turning up to buy or get this book and also sharing it on Goodreads. There seem to be some differences between the book and the TV series. I'm relying on a 10-12 year old memory, but IIRC Khaderbhai, one of the main characters who sort of takes Lin/Shantaram under his wing, is an Indian. In the series, he is western, or at least looks western/Middle Eastern to me. Also, they have tried to reproduce 1980s Mumbai/Bombay, but I don't know how accurate they were. My impression of that city, from a couple of visits around that time, is that there were still more tongas (horse-drawn carriages), the occasional two-wheeler, and not many three-wheelers. It was also one of the more turbulent times, as a lot of agitation for worker rights was happening then, along with a lot of industrial action. Later that led to the closure of much manufacturing in Bombay, and the city became more commercial. It would be interesting to know whether they shot it in actual India or just built a set somewhere in Australia, where it possibly might have been shot: the chawl of the book needs a bit of arid land, and Australia has lots of it. It is also interesting because this was a project that had the who's who interested in it for a long time, but somehow none of them was able to bring it to fruition; the production seems to have a largely Australian cast, as well as second-generation Indians growing up in Australia. To take names: Amitabh Bachchan, Johnny Depp and Russell Crowe each wanted to make it into a feature film. In retrospect, it is good it was not made into a movie; they would have had to cut a lot of material, and that perhaps wouldn't have been sufficient. Making it into a web series means they can have multiple seasons if people like it. There is a lot of material between now and 12 episodes, so it is hard to even guess where it will leave you. So, if you have not read the book and have some holidays coming up, I can recommend it. The writing, IIRC, is easy and just flows.
There is a bit of action, but much more nuance in the book, while the web series is naturally more about action. There is also quite a bit of philosophy between him and Khaderbhai, and while the series touches upon it, it doesn't do it justice; but then again, it is being made commercially. Read the book, see the series, and share your thoughts on what you think. It is possible that the series might go up or down from here, but I am sharing from where I see it; I may do another post at the end of the season, depending on where they leave it and my impressions. Update A slight update from the last blog post: it seems Rishi Sunak will be made PM of the UK. With Hunt as chancellor and Rishi Sunak as PM, Austerity 2.0 seems complete. There have been numerous articles that share how austerity gives rise to fascism and vice versa; history gives a lot of lessons about the same. In Germany, when the economy was not good, it was all blamed on the Jews for a number of years. This was the reason for the rise of Hitler, and while the economy did go up by a bit, propaganda by him and his loyalists did the rest. And we know and have read about the Holocaust. Today quite a few Germans deny it or deny parts of it, but that's how misinformation spreads. Also, Hitler is now looked at more as an aberration rather than something to do with the German soul. I am not gonna talk more, as there is still lots to share, and that perhaps requires its own blog post to do it justice.

The Pyramid by Henning Mankell I had actually wanted to review this book first, but then the bomb called Shantaram appeared and I had to post it above. I had read two or three books before it, but most of them were about multiple beheadings and serial killers; enough to put anybody into depression. I do not know if modern crime fiction needs to show crime and desperation at such a level. This is why I, and most people, loved and continue to love Sherlock Holmes: most of the stories were not about gross violence but rather a homage to the art of deduction, which pretty much seems to be missing in modern crime thrillers in favour of grotesque stuff. So, like a breath of fresh air, I read/am reading The Pyramid by Henning Mankell. The book is about a character created by Henning Mankell named Kurt Wallander. I am aware of the series called Wallander but haven't yet seen it. The book starts with Wallander as a beat cop around age 20 and on his first case. He is ambitious, wants to become a detective, and has a narrow escape with death. I won't go much into it, as it basically gives you an idea of the character, how he thinks and what he does. He is more intuitive by nature and somewhat of a loner. Probably most detectives IRL are; I don't know, I have no clue. At least in the literary world it makes sense; in the real world I think there would be much irony for sure. This is speculation on my part; who knows. Back to the book though. The book has five stories, a sort of prequel one could say, but also not entirely: it is a kind of prequel and a kind of anthology, covering from his first case as a beat cop in 1969 to the 1990s, where he is sort of ending his career. Before I start sharing the stories in the book, I found the foreword quite interesting too. It asks questions about the interplay of the welfare state and Swedish democracy. Incidentally, I did watch a couple of videos about the sort of mixed political representation that happens in Sweden; it uses what is known as proportional representation. Ironically, Sweden made a turn to the far right this election season. The book was originally in Swedish and was translated to English by Ebba Segerberg and Laurie Thompson. While all the stories are interesting, I would share the last three, or at least pose their questions of intrigue; of course, to answer them you would need to read the book. So, the last three stories I found the most intriguing. 1. Man on the Beach Apparently, a gentleman goes to one of the beaches, a sort of lonely beach, hails a taxi, and while returning suddenly dies. The taxi driver, showing good presence of mind, takes him to hospital, where the gentleman is declared dead on arrival. Unlike in India, the driver doesn't run away but goes to the cafeteria and waits there for the cops to arrive and take his statement. Now, the man is in his early 40s and looks to be fit. Upon searching his pockets, he is found to be relatively well-off, and it later turns out he owns a couple of shops. So here are the questions: What was the man doing on the beach? In summer that beach is somewhat popular, but at other times not so much, so what was he doing there? How did he die; was it a simple heart attack or something more? If he had been drugged, then when and how? These and more questions can be answered by reading the story Man on the Beach. 2.
The death of a photographer Kurt lives in a small town where almost all the residents have been served, one way or another, by the town photographer. The man was polite and had worked for something like 40-odd years before he is killed/murdered, apparently late at night. So here come the questions: a. The shop doesn't even stock any cameras, and his cash box still has cash in it; further investigation reveals the amount is close to his average takings for the day. So if it's not for cash, then what is the motive? b. The body was discovered by his cleaning lady, who has worked for him for almost 20 years, three days a week, and has her own set of keys to come and clean the office. Did she give the keys to someone, and if yes, why? c. Even after investigation, there is no scandal about the man: no other woman, nor any vices like gambling that could rack up loans. Also, nobody really seems to have known him, and yet everybody took him for granted till he died. The whole thing appears quite strange. Again, the answers lie in the book. 3. The Pyramid Kurt is sleeping one night when the telephone rings. The scene starts with a Piper Cherokee, a single-engine piston aircraft, flying low and dropping something somewhere, or picking somebody up, on the coast of Sweden. It turns and after a while crashes. Kurt is called in to investigate. It turns out the plane was supposed to have been destroyed. In the crash, both the pilot and the passenger are blown to pieces, so only dental records can prove who they are. The same day, or a day or two later, the charred bodies of two seemingly ordinary, somewhat elderly women, spinsters by all accounts, are found :(. They lived above the shop where they sold buttons and all kinds of sewing needs for the town, and seemed middle-class. So here come the questions: a. Did the plane drop something, or pick something or somebody up? The Cherokee is a small plane, so it could have landed at any airfield, or, if a place was somehow marked, something could be dropped or picked up without actually landing. b. The firefighter suspects arson, started at multiple places with the use of petrol. The question is, why would somebody want to do that? The sisters don't seem to have been wealthy, and practically everybody had bought stuff from them; they weren't popular, but they weren't unpopular either. c. Are the two crimes connected or unconnected? If connected, then how? d. Most important: why is the story titled The Pyramid? Why does the author use the name Pyramid; does he mean the actual, original thing? He could have named it Triangle. Again, answers to all the above can be found in the book. One thing I became very aware of while reading is how difficult it is to understand people's behavior and what they do, and this is without any criminality involved. Let's say, for example, I die in some mysterious circumstances: the chances of the police reconstructing my actions in my last days would be limited, and this is when I have hearing loss. This probably has more to do with how our minds are wired. And most people I know are much more privacy-conscious/aware than I am.

Japan's Hikikomori Japan has been a curious country. It was more or less a colonizer and somewhat of a feared power till it unnecessarily dragged the U.S. into World War 2. The result of the two atom bombs and the restitution meant that Japan had to build itself up again from the ground. It is also in a seismically unstable place, as they have frequent earthquakes, although the buildings are hardened/balanced to make sure that vibrations don't tear them apart. I had seen, years ago on NatGeo, a documentary that explains all that. Apart from that, Japan was helped by the Americans, and there was good kinship between them till the 1980s, when it signed the Plaza Accord, which inflated asset price bubbles that eventually burst; something from which they are smarting even today. Japan has a constitutional monarchy; a bit of a history lesson on why it exists even today can be found here. Add to the asset price bubbles of the 1980s more than 50 percent of the population on zero-hour contracts, while the rest tend to suffer from overwork; there is a term called Karoshi that explains it all. An Indian pig-pen would be two, two and a half times larger than a typical Japanese home; most Japanese live in micro-apartments called "konbachiku". All of the above stresses have meant that lately many young Japanese people have become Hikikomori; Bloomberg featured a story about this a couple of years back. I came to know about it because many Indians are given the idea of Japan being a successful country without knowing the ills and issues it faces. Even there, most women get the short end of the stick, i.e. even if they manage to find jobs, it would mostly be back-breaking menial work. The employment statistics of Japan's internal ministry tell their own story.

If you look at the data above, it seems that between 2002 and 2019 the share of zero-hour contracts increased while regular work decreased. This also means that those at the bottom of the ladder can no longer afford a home. There is a video called Lost in Manboo that went viral a few years ago. It is a perfect storm. Add to that the Fukushima nuclear incident, about which I had shared a few years ago: while the workers get blamed, all the design decisions are taken by the management, as has been shown in numerous movies, documentaries etc. Interestingly, and somewhat ironically, the line workers knew the correct things to do and the correct decisions to take, unlike the management. The shut-ins story is almost a decade or two old. It is a similar story in South Korea, though not as depressing as in Japan. It is a somewhat depressing story, but it needed to be shared. The stories shared in the Bloomberg article make your heart ache.

Backpacks In and around 2015, I bought a Targus backpack, very similar to the Targus TSB194US-70 Motor 16-inch Backpack. That bag has given me a lot of comfort over the years but has now become frayed; the zip sometimes works and sometimes doesn't. Unlike those days, there are a bunch of companies now operating in India. There are eight different options that I came to know about: Aircase, Harrisons Sirius, HP Odyssey, Mokobara, Arctic Hunter, Dell Pro Hybrid, Dell Roller Backpack and lastly the Decathlon Quechua Hiking backpack 32L NH Escape 500. Of all the above, two backpacks seem the best. The first one is the Harrisons Sirius: with 45L capacity, I don't think I would need another bag at all. The runner-up is the Decathlon Quechua Hiking Backpack 32L. One of the better things about all these bags is that they have hidden pockets for easily taking the passport in and out while being anti-theft. I do not have to stress how stressful it is to take out the passport and put it back in; almost all the vendors have made sure that it is not a stress point anymore. The good thing about the Quechua is that they give a 10-year warranty; the point to be asked is whether the warranty covers the zip. Zips are the first thing to go in bags; that is actually what happened to my current bag. Decathlon has a store in Wakad, Pune, while I have reached out to the gentleman in charge of Harrisons India to see if they have a reseller in Pune. So hopefully, within the next week, I should have a backpack that isn't spilling things all over the place, whichever I'm able to figure out.

16 October 2022

Sven Hoexter: CentOS 9, stunnel, an openssl memory leak and a VirtualBox crash

tl;dr: OpenSSL 3.0.1 leaks memory in ssl3_setup_write_buffer(); it seems to be fixed in 3.0.5 / 3.0.2. The issue manifests at least in stunnel and keepalived on CentOS 9. In addition, I learned the hard way that running a not-so-recent VirtualBox version on Debian bullseye led to dh parameter generation crashing in libcrypto in bn_sqr8x_internal().

A recent rabbit hole I went down. The actual bug in openssl was nailed down and documented by Quentin Armitage on GitHub in keepalived. My bugreport with all the back and forth is in the RedHat Bugzilla as #2128412.

Act I - Hello stunnel, this is the OOMkiller Calling

We started to use stunnel on Google Cloud compute engine instances running CentOS 9. The loadbalancer in front of those instances used a TCP health check to validate the backend availability. A day or so later the stunnel instances got killed by the OOMkiller. Restarting stunnel and looking into /proc/<pid>/smaps showed a heap segment growing quite quickly.

Act II - Reproducing the Issue

While I'm not the biggest fan of VirtualBox and Vagrant, I have to admit it's quite nice to just fire up a VM image and give other people a chance to recreate that setup as well. Since VirtualBox is no longer released with Debian/stable, I just recompiled what was available in unstable at the time of the bullseye release, and used that. That enabled me to just start a CentOS 9 VM, set up stunnel with a minimal config, grab netcat and a loop, and watch the memory grow. E.g.

while true; do nc -z localhost 2600; sleep 1; done

To my surprise, in addition to the memory leak, I also observed some crashes, but did not yet care too much about those.

Act III - Wrong Suspect, a Workaround and Bugreporting

Of course the first idea was that something must be wrong in stunnel itself, but I could not find any recent bugreports. My assumption was that there are still a few people around using CentOS and stunnel, so someone else would probably have seen the issue before. Just to be sure, I recompiled the latest stunnel package from Fedora. That didn't change anything. Next I recompiled it without almost all the patches Fedora/RedHat carries. Nope, no progress. Next idea: maybe this is related to the fact that we do not initiate a TLS context after connecting? So we changed the test case from nc to openssl s_client, and the loadbalancer healthcheck from TCP to a TLS-based one. Tada: a workaround, no more memory leaking. In addition I gave Fedora a try (they have Vagrant VirtualBox images in the "Cloud" Spin, e.g. here for Fedora 36) and my local Debian installation a try. No leaks experienced on either. Next I reported #2128412.

Act IV - Crash in libcrypto and a VirtualBox Bug

When I moved the test case from the Google Cloud compute instance to my local VM, I encountered some crashes. That morphed into a real problem when I started to run stunnel with gdb and valgrind. All crashes happened in libcrypto bn_sqr8x_internal() when generating new dh parameters (stunnel does that for you if you do not use static dh parameters). I quickly worked around that by generating static dh parameters for stunnel. After some back and forth I suspected VirtualBox as the culprit. Recompiling the current VirtualBox version (6.1.38-dfsg-3) from unstable on bullseye works without any changes, and upgrading actually fixed that issue.

Epilog

I highly appreciate that RedHat, with all the bashing around the future of CentOS, still works on community-contributed bugreports. My kudos go to Clemens Lang. :)
Now that the root cause is clear, I guess RedHat will push out a fix for the openssl 3.0.1-based release they have in RHEL/CentOS 9. Until that is available, at least stunnel and keepalived are known to be affected. If you run stunnel on something public it's not that pretty, because even a low rate of TCP connections will result in a DoS condition.
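For anyone wanting to recreate the setup: a minimal reproduction sketch along these lines should do. Note that the service name, certificate path and backend below are made-up placeholders rather than the actual config from this post; only the accept port is chosen to match the nc loop above.

; /etc/stunnel/leaktest.conf -- hypothetical minimal config
foreground = yes
[leaktest]
accept = 2600
connect = 127.0.0.1:22
cert = /etc/stunnel/leaktest.pem

Heap growth can then be watched with something like grep -A1 '\[heap\]' /proc/$(pidof stunnel)/smaps. The static dh parameter workaround from Act IV amounts to generating parameters once, e.g. with openssl dhparam 2048, and appending them to the certificate file, since stunnel (per its documentation) reads dh parameters from the file given via the cert option.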

7 October 2022

Reproducible Builds: Reproducible Builds in September 2022

Welcome to the September 2022 report from the Reproducible Builds project! In our reports we try to outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. If you are interested in contributing to the project, please visit our Contribute page on our website.
David A. Wheeler reported to us that the US National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA) and the Office of the Director of National Intelligence (ODNI) have released a document called Securing the Software Supply Chain: Recommended Practices Guide for Developers (PDF). As David remarked in his post to our mailing list, it expressly recommends having reproducible builds as part of "advanced" recommended mitigations. The publication of this document has been accompanied by a press release.
Holger Levsen was made aware of a small Microsoft project called oss-reproducible. Part of OSSGadget, a larger collection of tools "for analyzing open source packages", the purpose of oss-reproducible is to:
analyze open source packages for reproducibility. We start with an existing package (for example, the NPM left-pad package, version 1.3.0), and we try to answer the question, Do the package contents authentically reflect the purported source code?
More details can be found in the README.md file within the code repository.
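Assuming it follows the package-URL convention used by the other OSSGadget tools, an invocation checking the left-pad example from the quote above would presumably look something like the following; this is an educated guess rather than a tested command, and the README.md is authoritative:

oss-reproducible pkg:npm/left-pad@1.3.0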
David A. Wheeler also pointed out that there are some potential upcoming changes to the OpenSSF Best Practices badge for open source software in relation to reproducibility. Whilst the badge programme has three certification levels ("passing", "silver" and "gold"), the gold level includes the criterion that "The project MUST have a reproducible build". David reported that some projects have argued that this reproducibility criterion should be slightly relaxed as outlined in an issue on the best-practices-badge GitHub project. Essentially, though, the claim is that the reproducibility requirement doesn't make sense for projects that do not release built software, and that timestamp differences by themselves don't necessarily indicate malicious changes. Numerous pragmatic problems around excluding timestamps were raised in the discussion of the issue.
Sonatype, a pioneer of "software supply chain management", issued a press release this month to report that they had found:
[…] a massive year-over-year increase in cyberattacks aimed at open source project ecosystems. According to early data from Sonatype's 8th annual State of the Software Supply Chain Report, which will be released in full this October, Sonatype has recorded an average 700% jump in repository attacks over the last three years.
More information is available in the press release.
A number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb adding a redirect from /projects/ to /who/ in order to keep old or archived links working [ ], Jelle van der Waa adding a Rust programming language example for SOURCE_DATE_EPOCH [ ][ ] and Mattia Rizzolo including Protocol Labs amongst our project-level sponsors [ ].
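As a refresher, SOURCE_DATE_EPOCH is an environment variable carrying a Unix timestamp that participating build tools use in place of the current time. A common pattern, sketched here under the assumption of a Git checkout and a build system that honours the variable, is to pin it to the date of the last commit:

export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
make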

Debian There was a large amount of reproducibility work taking place within Debian this month:
  • The nfft source package was removed from the archive, and now all packages in Debian bookworm have a corresponding .buildinfo file. This can be confirmed and tracked on the associated page on the tests.reproducible-builds.org site.
  • Vagrant Cascadian announced on our mailing list an informal online sprint to help clear the huge backlog of submitted reproducible builds patches by performing NMUs (Non-Maintainer Uploads). The first such sprint took place on September 22nd with the following results:
    • Holger Levsen:
      • Mailed #1010957 in man-db asking for an update and whether to remove the patch tag for now. This was subsequently removed and the maintainer started to address the issue.
      • Uploaded gmp to DELAYED/15, fixing #1009931.
      • Emailed #1017372 in plymouth and asked for the maintainer's opinion on the patch. This resulted in the maintainer improving Vagrant's original patch (and uploading it) as well as filing an issue upstream.
      • Uploaded time to DELAYED/15, fixing #983202.
    • Vagrant Cascadian:
      • Verified and updated the patch for mylvmbackup. (#782318)
      • Verified/updated patches for libranlip. (#788000, #846975 & #1007137)
      • Uploaded libranlip to DELAYED/10.
      • Verified patch for cclive. (#824501)
      • Uploaded cclive to DELAYED/10.
      • Vagrant was unable to reproduce the underlying issue within #791423 (linuxtv-dvb-apps) and so the bug was marked as done.
      • Researched #794398 (in clhep).
    The plan is to repeat these sprints every two weeks, with the next taking place on Thursday October 6th at 16:00 UTC on the #debian-reproducible IRC channel.
  • Roland Clobus posted his 13th update of the status of reproducible Debian ISO images on our mailing list. During the last month, Roland ensured that the live images are now automatically fed to openQA for automated testing after they have been shown to be reproducible. Additionally Roland asked on the debian-devel mailing list about a way to determine the canonical timestamp of the Debian archive. [ ]
  • Following up on last month's work on reproducible bootstrapping, Holger Levsen filed two bugs against the debootstrap and cdebootstrap utilities. (#1019697 & #1019698)
Lastly, 44 reviews of Debian packages were added, 91 were updated and 17 were removed this month adding to our knowledge about identified issues. A number of issue types have been updated too, including the descriptions of cmake_rpath_contains_build_path [ ], nondeterministic_version_generated_by_python_param [ ] and timestamps_in_documentation_generated_by_org_mode [ ]. Furthermore, two new issue types were created: build_path_used_to_determine_version_or_package_name [ ] and captures_build_path_via_cmake_variables [ ].

Other distributions In openSUSE, Bernhard M. Wiedemann published his usual openSUSE monthly report.

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 222 and 223 to Debian, and made the following changes:
  • The cbfstools utility is now provided in Debian via the coreboot-utils package so we can enable that functionality within Debian. [ ]
  • Looked into Mach-O support.
  • Fixed the try.diffoscope.org service by addressing a compatibility issue between glibc/seccomp that was preventing the Docker-contained diffoscope instance from spawning any external processes whatsoever [ ]. I also updated the requirements.txt file, as some of the specified packages were no longer available [ ][ ].
In addition Jelle van der Waa added support for file version 5.43 [ ] and Mattia Rizzolo updated the packaging:
  • Also include coreboot-utils in the Build-Depends and Test-Depends fields so that it is available for tests. [ ]
  • Use pep517 and pip to load the requirements. [ ]
  • Remove packages in Breaks/Replaces that have been obsoleted since the release of Debian bullseye. [ ]
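As a quick illustration of typical diffoscope usage: it takes two artifacts, recursively unpacks them and renders the differences. A hypothetical comparison of two builds of the same Debian package, producing an HTML report (the file names here are placeholders), would be:

diffoscope --html report.html first/hello_1.0_amd64.deb second/hello_1.0_amd64.deb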

Reprotest reprotest is our end-user tool to build the same source code twice in widely and deliberately different environments, and then check whether the binaries produced by the builds have any differences. This month, reprotest version 0.7.22 was uploaded to Debian unstable by Holger Levsen, which included the following changes by Philip Hands:
  • Ensure that the setarch(8) utility can actually execute before including an architecture to test. [ ]
  • Include all files matching *.*deb in the default artifact_pattern in order to archive all results of the build. [ ]
  • Emit an error when building the Debian package if the Debian packaging version does not match the Python version of reprotest. [ ]
  • Remove an unneeded invocation of the head(1) utility. [ ]
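For context, the basic reprotest invocation takes a build command and an artifact pattern, runs the build twice while varying the environment, and then diffs the resulting artifacts. A sketch along the lines of the example in its documentation:

reprotest 'gcc -o hello hello.c' hello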

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate, and this month we wrote a large number of them.

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Add a job to build reprotest from Git [ ] and use the correct Git branch when building it [ ].
  • Mattia Rizzolo:
    • Enable syncing of results from building live Debian ISO images. [ ]
    • Use scp -p in order to preserve modification times when syncing live ISO images. [ ]
    • Apply the shellcheck shell script analysis tool. [ ]
    • In a build node wrapper script, remove some debugging code which was messing up calling scp(1) correctly [ ] and consequently add support to use both scp -p and regular scp [ ].
  • Roland Clobus:
    • Track and handle the case where the Debian archive gets updated between two live image builds. [ ]
    • Remove a call to sudo(1) as it is not (or no longer) required to delete old live-build results. [ ]

Contact As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website; you will also find ways to get in touch with us there.

29 September 2022

Antoine Beaupré: Detecting manual (and optimizing large) package installs in Puppet

Well this is a mouthful. I recently worked on a neat hack called puppet-package-check. It is designed to warn about manually installed packages, to make sure "everything is in Puppet". But it turns out it can (probably?) dramatically decrease the bootstrap time of Puppet when it needs to install a large number of packages.
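Conceptually, the check boils down to a set difference between two package lists: what the cached Puppet catalog declares as Package resources, and what apt considers manually installed. A rough shell sketch of the idea follows; this is not the actual implementation, the catalog path varies between Puppet versions, and the real tool also handles the popcon ranking and apt-mark state described below.

jq -r '.resources[] | select(.type == "Package") | .title' \
    /opt/puppetlabs/puppet/cache/client_data/catalog/$(hostname -f).json | sort > /tmp/puppet.txt
apt-mark showmanual | sort > /tmp/manual.txt
comm -13 /tmp/puppet.txt /tmp/manual.txt    # manually installed, unknown to Puppet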

Detecting manual packages On a cleanly filed workstation, it looks like this:
root@emma:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
0 unmanaged packages found
A messy workstation will look like this:
root@curie:/home/anarcat/bin# ./puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
288 unmanaged packages found
apparmor-utils beignet-opencl-icd bridge-utils clustershell cups-pk-helper davfs2 dconf-cli dconf-editor dconf-gsettings-backend ddccontrol ddrescueview debmake debootstrap decopy dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dnsdiag dropbear-initramfs ebtables efibootmgr elpa-lua-mode entr eog evince figlet file file-roller fio flac flex font-manager fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi freetype2-demos ftp fwupd-amd64-signed gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdisk gdm3 gdu gedit gedit-plugins gettext-base git-debrebase gnome-boxes gnote gnupg2 golang-any golang-docker-credential-helpers golang-golang-x-tools grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gtypist gvfs-backends hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-gtk3 ibus-libpinyin ibus-pinyin im-config imediff img2pdf imv initramfs-tools input-utils installation-birthday internetarchive ipmitool iptables iptraf-ng jackd2 jupyter jupyter-nbextension-jupyter-js-widgets jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools keyboard-configuration kfind konsole krb5-locales kwin-x11 leiningen lightdm lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 lynx lz4json magic-wormhole mailscripts mailutils manuskript mat2 mate-notification-daemon mate-themes mime-support mktorrent mp3splt mpdris2 msitools mtp-tools mtree-netbsd mupdf nautilus nautilus-sendto ncal nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache npm2deb ntfs-3g ntpdate nvme-cli nwipe obs-studio okular-extra-backends openstack-clients openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv playerctl plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby ruby-dev ruby-feedparser ruby-magic ruby-mocha ruby-ronn rygel-playbin rygel-tracker s-tui sanoid saytime scrcpy scrcpy-server screenfetch scrot sdate sddm seahorse shim-signed sigil smartmontools smem smplayer sng sound-juicer sound-theme-freedesktop spectre-meltdown-checker sq ssh-audit sshuttle stress-ng strongswan strongswan-swanctl syncthing system-config-printer system-config-printer-common system-config-printer-udev systemd-bootchart systemd-container tardiff task-desktop task-english task-ssh-server tasksel tellico texinfo texlive-fonts-extra texlive-lang-cyrillic texlive-lang-french texlive-lang-german texlive-lang-italian texlive-xetex tftp-hpa thunar-archive-plugin tidy tikzit tint2 tintin++ tipa tpm2-tools traceroute tree trocla ucf udisks2 unifont unrar-free upower usbguard uuid-runtime vagrant-cachier vagrant-libvirt virt-manager vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas wireshark xapian-tools xclip xdg-user-dirs-gtk xlax xmlto xsensors xserver-xorg xsltproc xxd xz-utils yubioath-desktop zathura zathura-pdf-poppler zenity zfs-dkms zfs-initramfs zfsutils-linux zip zlib1g zlib1g-dev
157 old: apparmor-utils clustershell davfs2 dconf-cli dconf-editor ddccontrol ddrescueview decopy dnsdiag ebtables efibootmgr elpa-lua-mode entr figlet file-roller fio flac flex font-manager freetype2-demos ftp gallery-dl gcc-arm-linux-gnueabihf gcolor3 gcp gdu gedit git-debrebase gnote golang-docker-credential-helpers golang-golang-x-tools gtypist hackrf hashcat html2text httpie httping hugo humanfriendly iamerican-huge ibus ibus-pinyin imediff input-utils internetarchive ipmitool iptraf-ng jackd2 jupyter-qtconsole k3b kbtin kdialog keditbookmarks keepassxc kexec-tools kfind konsole leiningen lightdm lynx lz4json magic-wormhole manuskript mat2 mate-notification-daemon mktorrent mp3splt msitools mtp-tools mtree-netbsd nautilus nautilus-sendto nd ndisc6 neomutt net-tools nethogs nghttp2-client nocache ntpdate nwipe obs-studio openstack-pkg-tools paprefs pass-extension-audit pcmanfm pdf-presenter-console pdf2svg percol pipenv playerctl qjackctl qpdfview qrencode r-cran-ggplot2 r-cran-reshape2 rake restic rhash rpl rpm2cpio rs ruby-feedparser ruby-magic ruby-mocha ruby-ronn s-tui saytime scrcpy screenfetch scrot sdate seahorse shim-signed sigil smem smplayer sng sound-juicer spectre-meltdown-checker sq ssh-audit sshuttle stress-ng system-config-printer system-config-printer-common tardiff tasksel tellico texlive-lang-cyrillic texlive-lang-french tftp-hpa tikzit tint2 tintin++ tpm2-tools traceroute tree unrar-free vagrant-cachier vagrant-libvirt vmtouch vorbis-tools w3m wamerican wamerican-huge wfrench whipper whohas xdg-user-dirs-gtk xlax xmlto xsensors xxd yubioath-desktop zenity zip
131 new: beignet-opencl-icd bridge-utils cups-pk-helper dconf-gsettings-backend debmake debootstrap dict-devil dict-freedict-eng-fra dict-freedict-eng-spa dict-freedict-fra-eng dict-freedict-spa-eng diffoscope dropbear-initramfs eog evince file fonts-cantarell fonts-inconsolata fonts-ipafont-gothic fonts-ipafont-mincho fonts-liberation fonts-monoid fonts-monoid-tight fonts-noto fonts-powerline fonts-symbola freeipmi fwupd-amd64-signed gdisk gdm3 gedit-plugins gettext-base gnome-boxes gnupg2 golang-any grub-efi-amd64-signed gsettings-desktop-schemas gsfonts gstreamer1.0-libav gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio gvfs-backends ibus-gtk3 ibus-libpinyin im-config img2pdf imv initramfs-tools installation-birthday iptables jupyter jupyter-nbextension-jupyter-js-widgets keyboard-configuration krb5-locales kwin-x11 lintian linux-image-amd64 linux-perf lmodern lsb-base lvm2 mailscripts mailutils mate-themes mime-support mpdris2 mupdf ncal npm2deb ntfs-3g nvme-cli okular-extra-backends openstack-clients plymouth plymouth-themes popularity-contest progress prometheus-node-exporter psensor pubpaste pulseaudio python3-ldap ruby ruby-dev rygel-playbin rygel-tracker sanoid scrcpy-server sddm smartmontools sound-theme-freedesktop strongswan strongswan-swanctl syncthing system-config-printer-udev systemd-bootchart systemd-container task-desktop task-english task-ssh-server texinfo texlive-fonts-extra texlive-lang-german texlive-lang-italian texlive-xetex thunar-archive-plugin tidy tipa trocla ucf udisks2 unifont upower usbguard uuid-runtime virt-manager wireshark xapian-tools xclip xserver-xorg xsltproc xz-utils zathura zathura-pdf-poppler zfs-dkms zfs-initramfs zfsutils-linux zlib1g zlib1g-dev
Yuck! That's a lot of shit to go through. Notice how the packages get sorted between "old" and "new" packages. This is because popcon is used as a tool to mark which packages are "old". If you have unmanaged packages, the "old" ones are likely things that you can uninstall, for example. If you don't have popcon installed, you'll also get this warning:
popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'
The error can otherwise be safely ignored, but you won't get "help" prioritizing the packages to add to your manifests. Note that the tool ignores packages that were "marked" (see apt-mark(8)) as automatically installed. This implies that you might have to do a little bit of cleanup the first time you run this, as Debian doesn't necessarily mark all of those packages correctly on first install. For example, here's how it looks on a clean install, after Puppet ran:
root@angela:/home/anarcat# ./bin/puppet-package-check -v
listing puppet packages...
listing apt packages...
loading apt cache...
127 unmanaged packages found
ca-certificates console-setup cryptsetup-initramfs dbus file gcc-12-base gettext-base grub-common grub-efi-amd64 i3lock initramfs-tools iw keyboard-configuration krb5-locales laptop-detect libacl1 libapparmor1 libapt-pkg6.0 libargon2-1 libattr1 libaudit-common libaudit1 libblkid1 libbpf0 libbsd0 libbz2-1.0 libc6 libcap-ng0 libcap2 libcap2-bin libcom-err2 libcrypt1 libcryptsetup12 libdb5.3 libdebconfclient0 libdevmapper1.02.1 libedit2 libelf1 libext2fs2 libfdisk1 libffi8 libgcc-s1 libgcrypt20 libgmp10 libgnutls30 libgpg-error0 libgssapi-krb5-2 libhogweed6 libidn2-0 libip4tc2 libiw30 libjansson4 libjson-c5 libk5crypto3 libkeyutils1 libkmod2 libkrb5-3 libkrb5support0 liblocale-gettext-perl liblockfile-bin liblz4-1 liblzma5 libmd0 libmnl0 libmount1 libncurses6 libncursesw6 libnettle8 libnewt0.52 libnftables1 libnftnl11 libnl-3-200 libnl-genl-3-200 libnl-route-3-200 libnss-systemd libp11-kit0 libpam-systemd libpam0g libpcre2-8-0 libpcre3 libpcsclite1 libpopt0 libprocps8 libreadline8 libselinux1 libsemanage-common libsemanage2 libsepol2 libslang2 libsmartcols1 libss2 libssl1.1 libssl3 libstdc++6 libsystemd-shared libsystemd0 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libtinfo6 libtirpc-common libtirpc3 libudev1 libunistring2 libuuid1 libxtables12 libxxhash0 libzstd1 linux-image-amd64 logsave lsb-base lvm2 media-types mlocate ncurses-term pass-extension-otp puppet python3-reportbug shim-signed tasksel ucf usr-is-merged util-linux-extra wpasupplicant xorg zlib1g
popcon stats not available: [Errno 2] No such file or directory: '/var/log/popularity-contest'
Normally, there should be no unmanaged packages here. But because of the way Debian is installed, a lot of libraries and some core packages are marked as manually installed, and are of course not managed through Puppet. There are two solutions to this problem:
  • really manage everything in Puppet (argh)
  • mark packages as automatically installed
I typically chose the second path and mark a ton of stuff as automatic. Then either they will be auto-removed, or will stop being listed. In the above scenario, one could mark all libraries as automatically installed with:
apt-mark auto $(./bin/puppet-package-check | grep -o 'lib[^ ]*')
... but if you trust that most of that stuff is actually garbage that you don't really want installed anyways, you could just mark it all as automatically installed:
apt-mark auto $(./bin/puppet-package-check)
In my case, that ended up keeping basically all libraries (because of course they're installed for some reason) and auto-removing this:
dh-dkms discover-data dkms libdiscover2 libjsoncpp25 libssl1.1 linux-headers-amd64 mlocate pass-extension-otp pass-otp plocate x11-apps x11-session-utils xinit xorg
You'll notice xorg in there: yep, that's bad. Not what I wanted. But for some reason, on other workstations, I did not actually have xorg installed. Turns out having xserver-xorg is enough, and that one has dependencies. So now I guess I just learned to stop worrying and live without X(org).

Optimizing large package installs But that, of course, is not all. Why make things simple when you can have an unreadable title that is trying to be both syntactically correct and click-baity enough to flatter my vain ego? Right. One of the challenges in bootstrapping Puppet with large package lists is that it's slow. Puppet lists packages as individual resources and will basically run apt install $PKG on every package in the manifest, one at a time. While the overhead of apt is generally small, when you add things like apt-listbugs, apt-listchanges, needrestart, triggers and so on, it can take forever to set up a new host. So for initial installs, it can actually make sense to skip the queue and just install everything in one big batch. And because the above tool inspects the packages installed by Puppet, you can run it against a catalog and get a full list of all the packages Puppet would install, even before Puppet is actually running. So when reinstalling my laptop, I basically did this:
apt install puppet-agent/experimental
puppet agent --test --noop
apt install $(./puppet-package-check --debug \
    2>&1 | grep '^puppet packages' |
    sed 's/puppet packages://;s/ /\n/g' |
    grep -v -e onionshare -e golint -e git-sizer -e github-backup -e hledger -e xsane -e audacity -e chirp -e elpa-flycheck -e elpa-lsp-ui -e yubikey-manager -e git-annex -e hopenpgp-tools -e puppet
) puppet-agent/experimental
That massive grep was because there are currently a lot of packages missing from bookworm: those are all packages that I have in my catalog but that still haven't made it to bookworm. Sad, I know. I eventually worked around that by adding bullseye sources so that the Puppet manifest actually ran. The point here is that this improves the Puppet run time a lot. All packages get installed at once, and you get a nice progress bar. Then you actually run Puppet to deploy configurations and all the other goodies:
puppet agent --test
I wish I could tell you how much faster that ran. I don't know, and I will not go through a full reinstall just to please your curiosity. The only hard number I have is that it installed 444 packages (which exploded into 10,191 packages with dependencies) in a mere 10 minutes. That might also be because the packages were already downloaded. In any case, I have a gut feeling it's faster, so you'll have to just trust my gut. It is, after all, much more important than you might think.

Similar work The blueprint system is something similar to this:
It figures out what you've done manually, stores it locally in a Git repository, generates code that's able to recreate your efforts, and helps you deploy those changes to production
That tool has unfortunately been abandoned for a decade at this point. Also note that the AutoRemove::RecommendsImportant and AutoRemove::SuggestsImportant apt configuration options are relevant here. If set to true (the default), a package will not be removed if it is (respectively) a Recommends or Suggests of another package (as opposed to a normal Depends). In other words, if you want to also auto-remove packages that are only Suggests, you would, for example, add this to apt.conf:
AutoRemove::SuggestsImportant false;
Paul Wise has tried to make the Debian installer and debootstrap properly mark packages as automatically installed in the past, but his bug reports were rejected. The other suggestions in this section are also from Paul, thanks!

25 September 2022

Shirish Agarwal: Rama II, Arthur C. Clarke, Aliens

Rama II This will be more of a short post about the current book I am reading. People who have seen Arrival will probably be more at home; people who have seen Avatar will also be familiar with the theme or concept I am sharing. Before I go into detail: it seems that Arthur C. Clarke wanted to use a powerful god or mythological character for the name, and that is somehow how the RAMA series started. The first book in the series explores an extraterrestrial spaceship that Earth people see and connect with. The spaceship is going somewhere and is doing an Earth flyby, so humans don't have much time to explore it, and it is difficult to figure out how the spaceship works. The spaceship is around 40 km long. They don't meet any living Ramans, mostly automated systems and something called biots. As I'm still reading it, I can't really say what happens next, although in Rama, or Rama I, the powers that be want to destroy it, while in the end they don't. Whether they could have destroyed it or not would be a whole other argument. What people need to realize is that the book is a giant What If scenario.

Aliens If there were any intelligent life in the Universe, I don't think they would take the pain of visiting Earth. And the reasons are far more mundane than anything else. Look at how we treat each other. One of the largest democracies on Earth, the U.S., has been so divided. While the progressives have made some good policies, the Republicans are into political stunts; consider the political stunt of sending refugees to Martha's Vineyard. The ex-president also made a statement that he can declassify anything just by thinking about it. Now understand this: a refugee is a legal migrant whose papers will be looked into by the American Govt., and till the time their application is approved or declined, they can work, have a house, or do whatever to support themselves. There is a huge difference between having refugee status and being an undocumented migrant. And it isn't as if the Republicans don't know this; they did it because they thought they would be able to get away with it. Both the above episodes don't throw us in a good light. If we treat others like the above, how can we expect to be treated? And refugees always have a hard time, not just in the U.S.; the UK, you name it. The UK just some months ago announced a controversial deal where they will send refugees to Rwanda while their refugee application is accepted or denied; most of them would be denied. The Indian Government is more of the same. A friend, a casual acquaintance, Nishant Shah, shared the same issues as I had shared a few weeks back, even though he's an NRI. So, it seems we are incapable of helping ourselves as well as helping others. On top of it, we have the temerity of using the word "alien" for them. Now, just for a moment, imagine you are an intelligent life form, one that could coax energy from the stars. Why would you come to Earth, where the people at large have already destroyed more than half of the atmosphere and are still arguing about it with the other half? On top of it, we see a list of authoritarian figures like Putin and Xi Jinping whose whole idea is to hold on to power for as long as they can, damn the consequences. Mr. Modi is no different; he is the dumbest of the lot, and that's saying something. Most of the projects made by him are in disarray, the Pune Metro in my city being an example, and this when Pune was the first applicant to apply for a Metro. Just like the UK, India too has tanked the economy under his guidance. Every time they come closer to target dates, the targets are pushed far into the future; for e.g., now they have said 2040 for a good economy. And just like in other countries, he has some following even though he has a record of failure in every sector: the economy, education, defense, the list is endless. There isn't a single accomplishment by him other than screwing with other religions. Most of my countrymen also don't really care or bother to see how the economy grows and how exports play a crucial part, otherwise they would be more alert. Also, just like the UK, India too gave tax cuts to the wealthy; most people don't understand how economies function, and the PM doesn't care. The media too is subservient, and because nobody asks the questions, nobody seems to be accountable :(.

Religion There is another aspect that has also come to the fore: just like in medieval times, I see a great fervor for religion happening here, especially since the pandemic, and people are much more insecure than ever before. Before, I used to think that insecurity and religious appeal only happen in the uneducated, and I was wrong. I have friends who are highly educated and yet are still blinded by religion. In many such cases or situations, I find their faith to be a sham. If you have faith, then there shouldn't be any room for doubt or insecurity. And if you are not in doubt or insecure, you won't need to talk about your religion. The difference between the two is like a person who has satiated their thirst and hunger: that person will be in a relaxed mode, while the other person continues to create drama, as there is no peace in their heart. Another fact is that none of the major religions, whether Christianity, Islam, Buddhism or even Hinduism, has allowed for the existence of extraterrestrials. We have already labeled them as "aliens" even before meeting them, based on just our imagination. And more often than not, we end up killing them. There are and have been scores of movies that have explored the idea: Independence Day, Aliens, Arrival, the list goes on and on. And because our religions have never thought about the idea of ETs and how they would affect us, if ETs do come, all the religions and religious practices would panic and die. That is possibly why even the 1947 Roswell Incident has been covered up. If the above was not enough, the bombing of Hiroshima and Nagasaki by the Americans will always be a black mark against humanity. From the alien perspective, if you look at the technology that they have vis-a-vis what we have, they would probably think of us as spoilt babies, and they wouldn't be wrong. Spoilt babies with nuclear weapons are not exactly a healthy mix.

Earth To add to our fragile ego, we haven't even left Earth, even though we have made sure to exploit it as much as we can. We even hold the anthropocentric or homocentric view that makes man the apex animal, and to top it off, we have this weird idea that extraterrestrials come here, or will invade, for water: a species that knows how to get energy out of stars but cannot make a little H2O? The idea belies logic and, again, has been done to death. Why we as humans are so insecure, even though we have been given so much, I fail to understand. I have shared the Kardashev Scale on this blog numerous times. The above are some of the reasons why Arthur C. Clarke's works are so controversial, and this is when I haven't even read the whole book. It forces us to ask questions that we normally would never think about. And I have to repeat that when these books were published for the first time, they were new ideas. All the movies, from Stanley Kubrick's 2001: A Space Odyssey to Aliens, Arrival and Avatar, somewhere or other reference some aspect of this work. It is highly possible that I may read and re-read the book a couple of times before beginning the next one. There is also quite a bit of human drama, but then that is to be expected. I have to admit I did have some nice dreams after reading just the first few pages, imagining being given the opportunity to experience an extraterrestrial spaceship that is beyond our wildest dreams. While the governments may try to cover it up or something, the experience for the ones who get to see that spacecraft would be unimaginable. And if they were able to share pictures or a livestream, it would be nothing short of amazing. For those who want more, there is a lot going on with the new James Webb Telescope. I am sure it will give rise to more questions than answers.

22 September 2022

Jonathan Dowland: Nine Inch Nails, Cornwall, June

In June I travelled to see Nine Inch Nails perform two nights at the Eden Project in Cornwall. It'd been eight years since I last saw them live, and when they announced the Eden shows, I thought it might be the only chance I'd get to see them for a long time. I committed and, sod's law, a week or so later they announced a handful of single-night UK club shows. On the other hand, on previous tours, where they'd typically book two club nights in each city, I'd attended one night and always felt I should have done both, so this time I was making that happen.

Newquay
approach by air
Towan Beach (I think)
For personal reasons it's been a difficult year, so it was nice to treat myself to a mini holiday. I stayed in Newquay, a seaside town with many similarities to the North East coast, as well as many differences. It's much bigger, and although we have a thriving surfing community in Tynemouth, Newquay has it on another level. They also have a lot more tourism, which is a double-edged sword: in Newquay, besides surfing, there was not a lot to do. There are a lot of tourist tat shops, and bars and cafes (some very nice ones), but no book shops, no record shops, very few of the quaint, unique boutique places we enjoy up here and possibly take for granted. If you want tie-dyed t-shirts though, you're sorted. Nine Inch Nails have a long-established, independently fan-run forum called Echoing The Sound. There is now also an official Discord server. I asked on both whether anyone was around in Newquay and wanted to meet up: not many people were! But I did meet a new friend, James, for a quiet drink. He was due to share a taxi with Sarah, who was flying in, but her flight was delayed and she had to figure out another route.

Eden Project
the Eden Project
The Eden Project, the venue itself, is a fascinating place. I didn't realise until I'd planned most of my time there that the gig tickets granted you free entry into the Project on the day of the gig as well as the day after. It was quite tricky to get from Newquay to the Eden Project (I would perhaps have been better off staying in St Austell itself), so I didn't take advantage of this, but I did have a couple of hours in total to explore a little at the venue before the gig on each night.

Friday 17th (sunny)

Once I got to the venue I managed to meet up with several names from ETS and the Discord: James, Sarah (who had managed to re-arrange her flights), Pete and his wife (sorry I missed your name), Via Tenebrosa (she of crab hat fame), Dave (DaveDiablo), Elliot and his sister, and finally James (sheapdean), someone who I've been talking to online for over a decade and finally met in person (and who taped both shows). I also tried to meet up with a friend from the Debian UK community (hi Lief) but I couldn't find him! Support for Friday was Nitzer Ebb, who I wasn't familiar with before. There were two men on stage, one operating instruments, the other singing. It was a tough slot in which to warm up the crowd: the venue was still very empty, and it was very bright and sunny, but I enjoyed what I was hearing. They're definitely on my list. I later learned that the band's regular singer (Doug McCarthy) was unable to make it, and so the guy I was watching (Bon Harris) was standing in for full vocal duties. This made the performance (and their subsequent one at Hellfest the week after) all the more impressive.
pic of the band
Via (with crab hat), Sarah, me (behind). pic by kraw
(Day) and night one, Friday, was very hot and sunny, and the band seemed a little uncomfortable, exposed on stage with little cover; Trent commented as much at least once. The setlist was eclectic, and I finally heard some of my white whale songs. Highlights for me were The Perfect Drug, which went unplayed from 1997 to 2018 and has now become a staple, and the second-ever performance of Everything, the first being a few days earlier. Also notable were three cuts in a row from the last LP Bad Witch, as well as Heresy and Love Is Not Enough.

Saturday 18th (rain)
with Elliot, before
Day/night 2, Saturday, was rainy all day. Support was Yves Tumor, who were an interesting clash of styles: a Prince/Bowie-esque lead clashing with a rock-out lead guitarist styling himself similarly to Brian May. I managed to find Sarah, Elliot (new gig best-buddy), Via and James (sheapdean) again. Pete was at this gig too, but opted to take a more relaxed position than the rail this time. I also spent a lot of time talking to a Canadian guy on a press pass (both nights) whose name I'm ashamed to have forgotten. The dank weather had Nine Inch Nails in their element. I think night one had the more interesting setlist, but night two had the best performance, hands down. Highlights for me were mostly a string of heavier songs (in rough order of scarcity, from common to rarely played): wish, burn, letting you, reptile, every day is exactly the same, the line begins to blur, and finally, happiness in slavery, the first UK performance since 1994. This was a crushing set. A girl in front of me was really suffering with the cold and rain after waiting at the venue all day to get a position on the rail. I thought she was going to pass out. A roadie with NIN noticed, and came over and gave her his jacket. He said if she waited until the end of the show and returned his jacket he'd give her a setlist, and true to his word, he did. This was a really nice thing to happen and really gave the impression that the folks who work on these shows are caring people.
Yep I was this close
A fuckin' rainbow! Photo by "Lazereth of Nazereth"
Afterwards
Night two did have some gentler songs and moments to remember: a re-arranged Sanctified (which ended a nineteen-year hiatus in 2013), And All That Could Have Been (recorded 2002, first played 2018), and La Mer, during which the rain broke and we were presented with a beautiful pink-hued rainbow. They then segued into Less Than, providing the comic moment of the night when Trent noticed the rainbow mid-song; now a meme that will go down in NIN fan history. Wrap-up This was a blow-out, once-in-a-lifetime trip to go and see a band who are at the top of their career in terms of performance. One problem I've had with NIN gigs in the past is suffering flashbacks to them when I go to other (inferior) gigs afterwards, and I'm pretty sure I will have this problem again. Doing both nights was worth it; the two experiences were very different and each had its own unique moments. The venue was incredible, and Cornwall is (modulo tourist trap stuff) beautiful.

9 September 2022

Reproducible Builds: Reproducible Builds in August 2022

Welcome to the August 2022 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Community news As announced last month, registration is currently open for our in-person summit this year, which is due to be held between November 1st and November 3rd. The event will take place in Venice (Italy). Very soon we intend to pick a venue reachable via the train station and an international airport. However, the precise venue will depend on the number of attendees. Please see the announcement email for information about how to register.
The US National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA) and the Office of the Director of National Intelligence (ODNI) have released a document called Securing the Software Supply Chain: Recommended Practices Guide for Developers (PDF) as part of their Enduring Security Framework (ESF) work. The document expressly recommends having reproducible builds as part of advanced recommended mitigations, along with hermetic builds. Page 31 (page 35 in the PDF) says:
Reproducible builds provide additional protection and validation against attempts to compromise build systems. They ensure the binary products of each build system match: i.e., they are built from the same source, regardless of variable metadata such as the order of input files, timestamps, locales, and paths. Reproducible builds are those where re-running the build steps with identical input artifacts results in bit-for-bit identical output. Builds that cannot meet this must provide a justification why the build cannot be made reproducible.
The full press release is available online.
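The property described in that definition boils down to a simple, mechanical check. As a rough illustration (a sketch with hypothetical directory names, not taken from the guide itself), one might run the same build twice and compare every output file bit-for-bit:
# Hash every file in two independent builds of the same source...
$ (cd build1 && find . -type f -print0 | sort -z | xargs -0 sha256sum) > first.sums
$ (cd build2 && find . -type f -print0 | sort -z | xargs -0 sha256sum) > second.sums
# ...and compare: any difference means the build is not reproducible.
$ diff first.sums second.sums && echo "bit-for-bit identical"
Tools like diffoscope (covered below) go much further and explain why two builds differ, but the pass/fail test really is this simple.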
On our mailing list this month, Marc Prud'hommeaux posted a feature request for diffoscope which additionally outlines a project called The App Fair, an autonomous distribution network of free and open-source macOS and iOS applications, where validated apps are then signed and submitted for publication.
Author/blogger Cory Doctorow published a provocative blog post this month titled Your computer is tormented by a wicked god. Touching on Ken Thompson's famous talk Reflections on Trusting Trust, the early goals of Secure Computing and UEFI firmware interfaces:
This is the core of a two-decade-old debate among security people, and it's one that the benevolent God faction has consistently had the upper hand in. They're the curated computing advocates who insist that preventing you from choosing an alternative app store or side-loading a program is for your own good, because if it's possible for you to override the manufacturer's wishes, then malicious software may impersonate you to do so, or you might be tricked into doing so. [..] This benevolent dictatorship model only works so long as the dictator is both perfectly benevolent and perfectly competent. We know the dictators aren't always benevolent. [ ] But even if you trust a dictator's benevolence, you can't trust in their perfection. Everyone makes mistakes. Benevolent dictator computing works well, but fails badly. Designing a computer that intentionally can't be fully controlled by its owner is a nightmare, because that is a computer that, once compromised, can attack its owner with impunity.

Lastly, Chengyu HAN updated the Reproducible Builds website to correct an incorrect Git command. [ ]

Debian In Debian this month, the essential and required package sets became 100% reproducible in Debian bookworm on the amd64 and arm64 architectures. These two subsets of the full Debian archive refer to Debian package priority levels as described in the 2.5 Priorities section of the Debian Policy; there is no canonical minimal installation package set in Debian due to its diverse methods of installation. As it happens, these package sets are not reproducible on the i386 architecture because the ncurses package on that architecture is not yet reproducible, and the sed package currently fails to build from source on armhf too. The full list of reproducible packages within these package sets can be viewed within our QA system, such as on the page of required packages in amd64 and the list of essential packages on arm64, both for Debian bullseye.
It has recently become very easy to install reproducible Debian Docker containers using podman on Debian bullseye:
$ sudo apt install podman
$ podman run --rm -it debian:bullseye bash
The (pre-built) image used is itself built using debuerreotype, as explained on docker.debian.net. This page also details how to build the image yourself and what checksums are expected if you do so.
Related to this, it has also become straightforward to reproducibly bootstrap Debian using mmdebstrap, a replacement for the usual debootstrap tool to create Debian root filesystems:
$ SOURCE_DATE_EPOCH=$(date --utc --date=2022-08-29 +%s) mmdebstrap unstable > unstable.tar
This works for (at least) Debian unstable, bullseye and bookworm, and is tested automatically by a number of QA jobs set up by Holger Levsen (unstable, bookworm and bullseye).
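Reproducibility here can be verified directly (a sketch, with hypothetical output filenames): run the same command twice and compare the resulting tarballs:
$ SOURCE_DATE_EPOCH=$(date --utc --date=2022-08-29 +%s) mmdebstrap unstable > unstable1.tar
$ SOURCE_DATE_EPOCH=$(date --utc --date=2022-08-29 +%s) mmdebstrap unstable > unstable2.tar
# identical digests mean the bootstrap is reproducible
$ sha256sum unstable1.tar unstable2.tar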
Work has also taken place to ensure that the canonical debootstrap and cdebootstrap tools are also capable of bootstrapping Debian reproducibly, although this currently requires a few extra steps (sketched after the list below):
  1. Clamping the modification time of any files that are newer than SOURCE_DATE_EPOCH so that they are not greater than SOURCE_DATE_EPOCH.
  2. Deleting a few files. For debootstrap, this requires deleting /etc/machine-id, /var/cache/ldconfig/aux-cache, /var/log/dpkg.log, /var/log/alternatives.log and /var/log/bootstrap.log; for cdebootstrap, the /var/log/apt/history.log and /var/log/apt/term.log files must be deleted as well.
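Assuming a chroot created at ./rootfs and SOURCE_DATE_EPOCH already exported, the two steps above might look roughly like this (a sketch of the idea only, not the project's actual scripts):
# Step 1: clamp any file modification times newer than SOURCE_DATE_EPOCH
$ find rootfs -newermt "@$SOURCE_DATE_EPOCH" -print0 | xargs -0r touch --no-dereference --date="@$SOURCE_DATE_EPOCH"
# Step 2: delete the files that legitimately differ between runs
$ rm -f rootfs/etc/machine-id rootfs/var/cache/ldconfig/aux-cache rootfs/var/log/dpkg.log rootfs/var/log/alternatives.log rootfs/var/log/bootstrap.log
# (for cdebootstrap, additionally:)
$ rm -f rootfs/var/log/apt/history.log rootfs/var/log/apt/term.log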
This process works at least for unstable, bullseye and bookworm, and is now being tested automatically by a number of QA jobs set up by Holger Levsen [ ][ ][ ][ ][ ][ ]. As part of this work, Holger filed two bugs to request a better initialisation of the /etc/machine-id file in both debootstrap [ ] and cdebootstrap [ ].
Elsewhere in Debian, 131 reviews of Debian packages were added, 20 were updated and 27 were removed this month, adding to our extensive knowledge about identified issues. Chris Lamb added a number of issue types, including: randomness_in_browserify_output [ ], haskell_abi_hash_differences [ ], nondeterministic_ids_in_html_output_generated_by_python_sphinx_panels [ ]. Lastly, Mattia Rizzolo removed the deterministic flag from the captures_kernel_variant flag [ ].

Other distributions Vagrant Cascadian posted an update of the status of Reproducible Builds in GNU Guix, writing that:
Ignoring the pesky unknown packages, it is more like ~93% reproducible and ~7% unreproducible... that feels a bit better to me! These numbers wander around over time, mostly due to packages moving back into an "unknown" state while the build farms catch up with each other... although the above numbers seem to have been pretty consistent over the last few days.
The post itself contains a lot more details, including a brief discussion of tooling. Elsewhere in GNU Guix, Vagrant updated a number of packages such as itpp [ ], perl-class-methodmaker [ ], libnet [ ], directfb [ ] and mm-common [ ], as well as updating the version of reprotest to 0.7.21 [ ]. In openSUSE, Bernhard M. Wiedemann published his usual openSUSE monthly report.

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 220 and 221 to Debian, as well as made the following changes:
  • Update external_tools.py to reflect changes to xxd and the vim-common package. [ ]
  • Depend on the dedicated xxd package now, not the vim-common package. [ ]
  • Don't crash if we can open a PDF file using the PyPDF library, but cannot subsequently parse the annotations within. [ ]
In addition, Vagrant Cascadian updated diffoscope in GNU Guix, first to version 220 [ ] and later to 221 [ ].

Upstream patches The Reproducible Builds project aims to fix as many currently-unreproducible packages as possible as well as to send all of our patches upstream wherever appropriate. This month we created a number of patches, including:

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, Holger Levsen made the following changes:
  • Debian-related changes:
    • Temporarily add Debian unstable deb-src lines to enable test builds for a Non-maintainer Upload (NMU) campaign targeting 708 sources without .buildinfo files found in Debian unstable, including 475 in bookworm. [ ][ ]
    • Correctly deal with the Debian Edu packages not being installable. [ ]
    • Finally, stop scheduling stretch. [ ]
    • Make sure all Ubuntu nodes have the linux-image-generic kernel package installed. [ ]
  • Health checks & view:
    • Detect SSH login problems. [ ]
    • Only report the first uninstallable package set. [ ]
    • Show new bootstrap jobs [ ] and debian-live jobs [ ] in the job health view.
    • Fix regular expression to detect various zombie jobs. [ ]
  • New jobs:
    • Add a new job to test reproducibility of mmdebstrap bootstrapping tool. [ ][ ][ ][ ]
    • Run our new mmdebstrap job remotely [ ][ ]
    • Improve the output of the mmdebstrap job. [ ][ ][ ]
    • Adjust the mmdebstrap script to additionally support debootstrap as well. [ ][ ][ ]
    • Work around mmdebstrap and debootstrap keeping logfiles within their artifacts. [ ][ ][ ]
    • Add support for testing cdebootstrap too and add such a job for unstable. [ ][ ][ ]
    • Use a reproducible value for SOURCE_DATE_EPOCH for all our new bootstrap jobs. [ ]
  • Misc changes:
    • Send the create_meta_pkg_sets notification to #debian-reproducible-changes instead of #debian-reproducible. [ ]
In addition, Roland Clobus re-enabled the tests for live-build images [ ] and added a feature where the build would retry instead of giving up when the archive was synced whilst building an ISO [ ], and Vagrant Cascadian added logging to report the current target of the /bin/sh symlink [ ].

Contact As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

6 September 2022

Louis-Philippe Véronneau: Montreal's Debian & Stuff - August 2022

Our local Debian user group gathered on Sunday August 28th [1] at the very hackish Foulab for the August 2022 edition of our "Debian & Stuff" meetings. As always, the event was a success and we had lots of fun. Nine people showed up, including some new faces and people I hadn't seen in a while: On my side, although I was badly sleep-deprived [2], I still managed to be somewhat productive! One of the WiFi Access Points we use in our 4-apartment LAN had been boot-looping for a few weeks, after a failed sysupgrade to the latest version of OpenWRT. lavamind and I suspect the flash got corrupted one way or another during the upgrade process... Lucky for us, this model has a serial port and runs U-Boot. After a bit of tinkering, some electrical tape and two different serial adapters [3], we managed to identify the pin layout and got a shell on the machine. The device has a reset button, but since the kernel panic was happening too soon in the boot process, we weren't able to get into OpenWRT's failsafe mode this way. The WiFi AP being flashed via a serial-to-USB adapter Once we had serial access, wiping the flash and re-installing OpenWRT fixed our problem. A quick ansible-playbook run later, the device was back to being usable and configured :) I was too tired to keep track of what others did, but I took some nice pictures of the pizza we got, and of this nice blow-up Tux wearing a Foulab t-shirt. Enjoy! A blow-up Tux wearing a Foulab t-shirt One of the pizzas we ordered As always, thanks to the Debian project for granting us a budget to rent the venue and to buy some food.

  1. Please excuse the late blog post, it's Harvest Season here and I've been quite busy.
  2. A bad case of wry neck kept me from sleeping properly for a while in August.
  3. As it turns out, serial connections work better when you use the right pins for TX and RX!

14 August 2022

Russ Allbery: Review: Still Not Safe

Review: Still Not Safe, by Robert L. Wears & Kathleen M. Sutcliffe
Publisher: Oxford University Press
Copyright: November 2019
ISBN: 0-19-027128-0
Format: Kindle
Pages: 232
Still Not Safe is an examination of the recent politics and history of patient safety in medicine. Its conclusions are summarized by the opening paragraph of the preface:
The American moral and social philosopher Eric Hoffer reportedly said that every great cause begins as a movement, becomes a business, and eventually degenerates into a racket. The reform movement to make healthcare safer is clearly a great cause, but patient safety efforts are increasingly following Hoffer's path.
Robert Wears was Professor of Emergency Medicine at the University of Florida specializing in patient safety. Kathleen Sutcliffe is Professor of Medicine and Business at Johns Hopkins. This book is based on research funded by a grant from the Robert Wood Johnson Foundation, for which both Wears and Sutcliffe were primary investigators. (Wears died in 2017, but the acknowledgments imply that at least early drafts of the book existed by that point and it was indeed co-written.) The anchor of the story of patient safety in Still Not Safe is the 1999 report from the Institute of Medicine entitled To Err is Human, to which the authors attribute an explosion of public scrutiny of medical safety. The headline conclusion of that report, which led nightly news programs after its release, was that 44,000 to 98,000 people died each year in the United States due to medical error. This report prompted government legislation, funding for new safety initiatives, a flurry of follow-on reports, and significant public awareness of medical harm. What it did not produce, in the authors' view, is significant improvements in patient safety. The central topic of this book is an analysis of why patient safety efforts have had so little measurable effect. The authors attribute this to three primary causes: an unwillingness to involve safety experts from outside medicine or absorb safety lessons from other disciplines, an obsession with human error that led to profound misunderstandings of the nature of safety, and the misuse of safety concerns as a means to centralize control of medical practice in the hands of physician-administrators. (The term used by the authors is "managerial, scientific-bureaucratic medicine," which is technically accurate but rather awkward.) Biggest complaint first: This book desperately needed examples, case studies, or something to make these ideas concrete. There are essentially none in 230 pages apart from passing mentions of famous cases of medical error that added to public pressure, and a tantalizing but maddeningly nonspecific discussion of the atypically successful effort to radically improve the safety of anesthesia. Apparently anesthesiologists involved safety experts from outside medicine, avoided a focus on human error, turned safety into an engineering problem, and made concrete improvements that had a hugely positive impact on the number of adverse events for patients. Sounds fascinating! Alas, I'm just as much in the dark about what those improvements were as I was when I started reading this book. Apart from a vague mention of some unspecified improvements to anesthesia machines, there are no concrete descriptions whatsoever. I understand that the authors were probably leery of giving too many specific examples of successful safety initiatives since one of their core points is that safety is a mindset and philosophy rather than a replicable set of actions, and copying the actions of another field without understanding their underlying motivations or context within a larger system is doomed to failure. But you have to give the reader something, or the book starts feeling like a flurry of abstract assertions. Much is made here of the drawbacks of a focus on human error, and the superiority of the safety analysis done in other fields that have moved beyond error-centric analysis (and in some cases have largely discarded the word "error" as inherently unhelpful and ambiguous).
A natural way to do that would have been to show an analysis of an adverse incident through an error lens and then through a more nuanced safety lens, making the differences concrete for the reader. It was maddening to me that the authors never did this. This book was recommended to me as part of a discussion about safety and reliability in tech and the need to learn from safety practices in other fields. In that context, I didn't find it useful, although surprisingly that's because the thinking in medicine (at least as presented by these authors) seems behind the current thinking in distributed systems. The idea that human error is not a useful model for approaching reliability is standard in large tech companies, nearly all of which use blameless postmortems for exactly that reason. Tech, similar to medicine, does have a tendency to be insular and not look outside the field for good ideas, but the approach to large-scale reliability in tech seems to have avoided the other traps discussed here. (Security is another matter, but security is also adversarial, which creates different problems that I suspect require different tools.) What I did find fascinating in this book, although not directly applicable to my own work, is the way in which a focus on human error becomes a justification for bureaucratic control and therefore a concentration of power in a managerial layer. If the assumption is that medical harm is primarily caused by humans making avoidable mistakes, and therefore the solution is to prevent humans from making mistakes through better training, discipline, or process, this creates organizations that are divided into those who make the rules and those who follow the rules. The long-term result is a practice of medicine in which a small number of experts decide the correct treatment for a given problem, and then all other practitioners are expected to precisely follow that treatment plan to avoid "errors." (The best distributed systems approaches may avoid this problem, but this failure mode seems nearly universal in technical support organizations.) I was startled by how accurate that portrayal of medicine felt. My assumption prior to reading this book was that the modern experience of medicine as an assembly line with patients as widgets was caused by the pressure for higher "productivity" and thus shorter visit times, combined with (in the US) the distorting effects of our broken medical insurance system. After reading this book, I've added a misguided way of thinking about medical error and risk avoidance to that analysis. One of the authors' points (which, as usual, I wish they'd made more concrete with a case study) is that the same thought process that lets a doctor make a correct diagnosis and find a working treatment is the thought process that may lead to an incorrect diagnosis or treatment. There is not a separable state of "mental error" that can be eliminated. Decision-making processes are more complicated and more integrated than that. If you try to prevent "errors" by eliminating flexibility, you also eliminate vital tools for successfully treating patients. The authors are careful to point out that the prior state of medicine in which each doctor was a force to themselves and there was no role for patient safety as a discipline was also bad for safety. Reverting to the state of medicine before the advent of the scientific-bureaucratic error-avoiding culture is also not a solution.
But, rather at odds with other popular books about medicine, the authors are highly critical of safety changes focused on human error prevention, such as mandatory checklists. In their view, this is exactly the sort of attempt to blindly copy the machinery of safety in another field (in this case, air travel) without understanding the underlying purpose and system of which it's a part. I am not qualified to judge the sharp dispute over whether there is solid clinical evidence that checklists are helpful (these authors claim there is not; I know other books make different claims, and I suspect it may depend heavily on how the checklist is used). But I found the authors' argument that one has to design systems holistically for safety, not try to patch in safety later by turning certain tasks into rote processes and humans into machines, to be persuasive. I'm not willing to recommend this book given how devoid it is of concrete examples. I was able to fill in some of that because of prior experience with the literature on site reliability engineering, but a reader who wasn't previously familiar with discussions of safety or reliability may find much of this book too abstract to be comprehensible. But I'm not sorry I read it. I hadn't previously thought about the power dynamics of a focus on error, and I think that will be a valuable observation to keep in mind. Rating: 6 out of 10

4 August 2022

Reproducible Builds: Reproducible Builds in July 2022

Welcome to the July 2022 report from the Reproducible Builds project! In our reports we attempt to outline the most relevant things that have been going on in the past month. As a brief introduction, the reproducible builds effort is concerned with ensuring no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Reproducible Builds summit 2022 Despite several delays, we are pleased to announce that registration is open for our in-person summit this year: November 1st to November 3rd.
The event will happen in Venice (Italy). We intend to pick a venue reachable via the train station and an international airport. However, the precise venue will depend on the number of attendees. Please see the announcement email for information about how to register.

Is reproducibility practical? Ludovic Courtès published an informative blog post this month asking the important question: Is reproducibility practical?:
Our attention was recently caught by a nice slide deck on the methods and tools for reproducible research in the R programming language. Among those, the talk mentions Guix, stating that it is for professional, sensitive applications that require ultimate reproducibility, which is probably a bit overkill for Reproducible Research. While we were flattered to see Guix suggested as a good tool for reproducibility, the very notion that there's a kind of reproducibility that is ultimate and, essentially, impractical, is something that left us wondering: What kind of reproducibility do scientists need, if not the ultimate kind? Is reproducibility practical at all, or is it more of a horizon?
The post goes on to outline the concept of reproducibility, situating examples within the context of the GNU Guix operating system.

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 218, 219 and 220 to Debian, as well as made the following changes:
  • New features:
  • Bug fixes:
    • Fix a regression introduced in version 207 where diffoscope would crash if one directory contained a directory that wasn't in the other. Thanks to Alderico Gallo for the testcase. [ ]
    • Don't traceback if we encounter an invalid Unicode character in Haskell versioning headers. [ ]
  • Output improvements:
  • Codebase improvements:
    • Space out a file a little. [ ]
    • Update various copyright years. [ ]

Mailing list On our mailing list this month:
  • Roland Clobus posted his Eleventh status update about reproducible [Debian] live-build ISO images, noting (amongst many other things!) that all major desktops build reproducibly with bullseye, bookworm and sid.
  • Santiago Torres-Arias announced a Call for Papers (CfP) for a new SCORED conference, an academic workshop around software supply chain security. As Santiago highlights, this new conference invites reviewers from industry, open source, government and academia to review the papers [and] I think that this is super important to tackle the supply chain security task.

Upstream patches The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. This month, we submitted the following patches:

Reprotest reprotest is the Reproducible Builds project's end-user tool to build the same source code twice in widely and deliberately different environments, and check whether the binaries produced by the builds have any differences. This month, the following changes were made:
  • Holger Levsen:
    • Uploaded version 0.7.21 to Debian unstable, as well as marking the start of 0.7.22 development in the repository [ ].
    • Make diffoscope dependency unversioned as the required version is met even in Debian buster. [ ]
    • Revert an accidentally committed hunk. [ ]
  • Mattia Rizzolo:
    • Apply a patch from Nick Rosbrook to not force the tests to run only against Python 3.9. [ ]
    • Run the tests through pybuild in order to run them against all supported Python 3.x versions. [ ]
    • Fix a deprecation warning in the setup.cfg file. [ ]
    • Close a new Debian bug. [ ]

Reproducible builds website A number of changes were made to the Reproducible Builds website and documentation this month, including:
  • Arnout Engelen:
  • Chris Lamb:
    • Correct some grammar. [ ]
  • Holger Levsen:
    • Add talk from FOSDEM 2015 presented by Holger and Lunar. [ ]
    • Show date of presentations if we have them. [ ][ ]
    • Add my presentation from DebConf22 [ ] and from Debian Reunion Hamburg 2022 [ ].
    • Add dhole to the speakers of the DebConf15 talk. [ ]
    • Add raboof's talk Reproducible Builds for Trustworthy Binaries from May Contain Hackers. [ ]
    • Drop some Debian-related suggested ideas which are not really relevant anymore. [ ]
    • Add a link to list of packages with patches ready to be NMUed. [ ]
  • Mattia Rizzolo:
    • Add information about our upcoming event in Venice. [ ][ ][ ][ ]

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, Holger Levsen made the following changes:
  • Debian-related changes:
    • Create graphs displaying existing .buildinfo files per each Debian suite/arch. [ ][ ]
    • Fix a typo in the Debian dashboard. [ ][ ]
    • Fix some issues in the pkg-r package set definition. [ ][ ][ ]
    • Improve the builtin-pho HTML output. [ ][ ][ ][ ]
    • Temporarily disable all live builds as our snapshot mirror is offline. [ ]
  • Automated node health checks:
    • Detect dpkg failures. [ ]
    • Detect files with bad UNIX permissions. [ ]
    • Relax a regular expression in order to detect Debian Live image build failures. [ ]
  • Misc changes:
    • Test that FreeBSD virtual machine has been updated to version 13.1. [ ]
    • Add a reminder about powercycling the armhf-architecture mst0X node. [ ]
    • Fix a number of typos. [ ][ ]
    • Update documentation. [ ][ ]
    • Fix Munin monitoring configuration for some nodes. [ ]
    • Fix the static IP address for a node. [ ]
In addition, Vagrant Cascadian updated host keys for the cbxi4pro0 and wbq0 nodes [ ] and, finally, node maintenance was also performed by Mattia Rizzolo [ ] and Holger Levsen [ ][ ][ ].

Contact As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

13 July 2022

Reproducible Builds: Reproducible Builds in June 2022

Welcome to the June 2022 report from the Reproducible Builds project. In these reports, we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

Save the date! Despite several delays, we are pleased to announce dates for our in-person summit this year: November 1st 2022 to November 3rd 2022.
The event will happen in/around Venice (Italy), and we intend to pick a venue reachable via the train station and an international airport. However, the precise venue will depend on the number of attendees. Please see the announcement mail from Mattia Rizzolo, and do keep an eye on the mailing list for further announcements, which will hopefully include registration instructions.

News David Wheeler filed an issue against the Rust programming language to report that builds are not reproducible because the full path to the source code ends up in the panic and debug strings. Luckily, as one of the responses mentions, the --remap-path-prefix flag solves this problem and has been used to great effect in build systems that rely on reproducibility (Bazel, Nix) to work at all, and there are efforts to teach cargo about it here.
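For anyone wanting to try this, --remap-path-prefix can be passed to rustc via the RUSTFLAGS environment variable, for example rewriting the local checkout directory to a fixed path (an illustrative invocation, not taken from the issue itself):
$ RUSTFLAGS="--remap-path-prefix=$PWD=/build" cargo build --release
After this, panic and debug strings should contain /build/... rather than the absolute path of the developer's working directory.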
The Python Security team announced that:
The ctx hosted project on PyPI was taken over via user account compromise and replaced with a malicious project which contained runtime code which collected the content of os.environ.items() when instantiating Ctx objects. The captured environment variables were sent as a base64 encoded query parameter to a Heroku application [ ]
As their announcement later goes on to state, version-pinning using hash-checking mode can prevent this attack, although this does depend on specific installations using this mode, rather than being a protection that can be applied systematically.
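Concretely, hash-checking mode means pinning each requirement to both a version and an expected digest, so that a hijacked release with different contents fails to install. A minimal sketch (the version number and digest below are placeholders, not real values for the ctx package):
$ cat requirements.txt
ctx==0.1.2 --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
$ pip install --require-hashes -r requirements.txt
pip aborts the installation if a downloaded artifact does not match the recorded hash.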
Developer vanitasvitae published an interesting and entertaining blog post detailing the blow-by-blow steps of debugging a reproducibility issue in PGPainless, a library which aims to make using OpenPGP in Java projects as simple as possible. Whilst their in-depth research into the internals of the .jar may have been unnecessary given that diffoscope would have identified the issue, there is something to be said for occasionally delving into seemingly low-level details, as well as for describing the debugging process. Indeed, as vanitasvitae writes:
Yes, this would have spared me from 3h of debugging. But I probably would also not have gone on this little dive into the JAR/ZIP format, so in the end I'm not mad.

Kees Cook published a short and practical blog post detailing how he uses reproducibility properties to aid his work replacing one-element arrays in the Linux kernel. Kees' approach is based on the principle that if a (small) proposed change is considered equivalent by the compiler, then the generated output will be identical, but only if no other arbitrary or unrelated changes are introduced. Kees mentions the fantastic diffoscope tool, as well as various kernel-specific build options (e.g. KBUILD_BUILD_TIMESTAMP) used in order to "prepare my build with the known to disrupt code layout options disabled".
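As a concrete sketch of that workflow (an assumed reconstruction, not Kees' actual scripts; the object file and patch names are hypothetical), one might rebuild a single object file before and after applying a refactoring patch, with the build timestamp pinned, and check that the two binaries are identical:
# build the object file with the build timestamp fixed
$ make KBUILD_BUILD_TIMESTAMP='Thu Jan 1 00:00:00 UTC 1970' drivers/foo/bar.o
$ cp drivers/foo/bar.o /tmp/bar.o.before
# apply the supposedly-equivalent refactor and rebuild identically
$ patch -p1 < replace-one-element-array.patch
$ make KBUILD_BUILD_TIMESTAMP='Thu Jan 1 00:00:00 UTC 1970' drivers/foo/bar.o
# identical output implies the compiler considers the change equivalent
$ cmp /tmp/bar.o.before drivers/foo/bar.o && echo "no binary change"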
Stefano Zacchiroli gave a presentation at GDR Sécurité Informatique based in part on a paper co-written with Chris Lamb titled Increasing the Integrity of Software Supply Chains. (Tweet)

Debian In Debian this month, 28 reviews of Debian packages were added, 35 were updated and 27 were removed, adding to our knowledge about identified issues. Two issue types were added: nondeterministic_checksum_generated_by_coq and nondetermistic_js_output_from_webpack. After Holger Levsen found hundreds of packages in the bookworm distribution that lack .buildinfo files, he uploaded 404 source packages to the archive (with no meaningful source changes). Bookworm now shows only 8 packages without .buildinfo files, and those 8 are fixed in unstable and should migrate shortly. By contrast, Debian unstable will always have packages without .buildinfo files, as this is how they come through the NEW queue. However, as these packages were not built on the official build servers (i.e. they were uploaded by the maintainer) they will never migrate to Debian testing. In the future, therefore, testing should never have packages without .buildinfo files again. Roland Clobus posted to our mailing list yet another in-depth status report about his progress making the Debian Live images build reproducibly. In this update, Roland mentions that all major desktops build reproducibly with bullseye, bookworm and sid, but also goes on to outline the progress made with automated testing of the generated images using openQA.

GNU Guix Vagrant Cascadian made a significant number of contributions to GNU Guix: Elsewhere in GNU Guix, Ludovic Courtès published a paper in the journal The Art, Science, and Engineering of Programming called Building a Secure Software Supply Chain with GNU Guix:
This paper focuses on one research question: how can [Guix](https://www.gnu.org/software/guix/) and similar systems allow users to securely update their software? [ ] Our main contribution is a model and tool to authenticate new Git revisions. We further show how, building on Git semantics, we build protections against downgrade attacks and related threats. We explain implementation choices. This work has been deployed in production two years ago, giving us insight on its actual use at scale every day. The Git checkout authentication at its core is applicable beyond the specific use case of Guix, and we think it could benefit to developer teams that use Git.
A full PDF of the text is available.

openSUSE In the world of openSUSE, SUSE announced at SUSECon that they are preparing to meet SLSA level 4. (SLSA, Supply chain Levels for Software Artifacts, is a new industry-led standardisation effort that aims to protect the integrity of the software supply chain.) However, at the time of writing, timestamps within RPM archives are not normalised, so bit-for-bit identical reproducible builds are not possible. Some in-toto provenance files were published for SUSE's SLE-15-SP4 as one result of the SLSA level 4 effort. Old binaries are not rebuilt, so only new builds (e.g. maintenance updates) have this metadata added. Lastly, Bernhard M. Wiedemann posted his usual monthly openSUSE reproducible builds status report.

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 215, 216 and 217 to Debian unstable. Chris Lamb also made the following changes:
  • New features:
    • Print profile output if we were called with --profile and we were killed via a TERM signal. This should help in situations where diffoscope is terminated due to some sort of timeout. [ ]
    • Support both PyPDF 1.x and 2.x. [ ]
  • Bug fixes:
    • Also catch IndexError exceptions (in addition to ValueError) when parsing .pyc files. (#1012258)
    • Correct the logic for supporting different versions of the argcomplete module. [ ]
  • Output improvements:
    • Don't leak the (likely-temporary) pathname when comparing PDF documents. [ ]
  • Logging improvements:
    • Update test fixtures for GNU readelf 2.38 (now in Debian unstable). [ ][ ]
    • Be more specific about the minimum required version of readelf (ie. binutils), as it appears that this patch level version change resulted in a change of output, not the minor version. [ ]
    • Use our @skip_unless_tool_is_at_least decorator (NB. at_least) over @skip_if_tool_version_is (NB. is) to fix tests under Debian stable. [ ]
    • Emit a warning if/when we are handling a UNIX TERM signal. [ ]
  • Codebase improvements:
    • Clarify in what situations the main finally block gets called with respect to TERM signal handling. [ ]
    • Clarify control flow in the diffoscope.profiling module. [ ]
    • Correctly package the scripts/ directory. [ ]
In addition, Edward Betts updated a broken link to the RSS on the diffoscope homepage and Vagrant Cascadian updated the diffoscope package in GNU Guix [ ][ ][ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Add a package set for packages that use the R programming language [ ] as well as one for Rust [ ].
    • Improve package set matching for Python [ ] and font-related [ ] packages.
    • Install the lz4, lzop and xz-utils packages on all nodes in order to detect running kernels. [ ]
    • Improve the cleanup mechanisms when testing the reproducibility of Debian Live images. [ ][ ]
    • In the automated node health checks, deprioritise the generic kernel warning. [ ]
  • Roland Clobus (Debian Live image reproducibility):
    • Add various maintenance jobs to the Jenkins view. [ ]
    • Cleanup old workspaces after 24 hours. [ ]
    • Cleanup temporary workspace and resulting directories. [ ]
    • Implement a number of fixes and improvements around publishing files. [ ][ ][ ]
    • Don t attempt to preserve the file timestamps when copying artifacts. [ ]
And finally, node maintenance was also performed by Mattia Rizzolo [ ].

Mailing list and website On our mailing list this month: Lastly, Chris Lamb updated the main Reproducible Builds website and documentation in a number of small ways, but primarily published an interview with Hans-Christoph Steiner of the F-Droid project. Chris Lamb also added a CoffeeScript example for parsing and using the SOURCE_DATE_EPOCH environment variable [ ]. In addition, Sebastian Crane very helpfully updated the screenshot of salsa.debian.org's request access button on the How to join the Salsa group page. [ ]

Contact If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

1 July 2022

Paul Wise: FLOSS Activities June 2022

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review
  • Spam: reported 5 Debian bug reports and 45 Debian mailing list posts
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:

Administration
  • Debian wiki: unblock IP addresses, assist with account recovery, approve accounts

Communication

Sponsors The sptag work was sponsored. All other work was done on a volunteer basis.

6 June 2022

Reproducible Builds: Reproducible Builds in May 2022

Welcome to the May 2022 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to over the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

RepFix paper Zhilei Ren, Shiwei Sun, Jifeng Xuan, Xiaochen Li, Zhide Zhou and He Jiang have published an academic paper titled Automated Patching for Unreproducible Builds:
[..] fixing unreproducible build issues poses a set of challenges [..], among which we consider the localization granularity and the historical knowledge utilization as the most significant ones. To tackle these challenges, we propose a novel approach [called] RepFix that combines tracing-based fine-grained localization with history-based patch generation mechanisms.
The paper (PDF, 3.5MB) uses the Debian mylvmbackup package as an example to show how RepFix can automatically generate patches to make software build reproducibly. As it happens, Reiner Herrmann submitted a patch for the mylvmbackup package which has remained unapplied by the Debian package maintainer for over seven years, thus this paper inadvertently underscores that achieving reproducible builds will require both technical and social solutions.

Python variables Johannes Schauer discovered a fascinating bug where simply naming your Python variable _m led to unreproducible .pyc files. In particular, the types module in Python 3.10 requires the following patch to make it reproducible:
--- a/Lib/types.py
+++ b/Lib/types.py
@@ -37,8 +37,8 @@ _ag = _ag()
 AsyncGeneratorType = type(_ag)
 
 class _C:
-    def _m(self): pass
-MethodType = type(_C()._m)
+    def _b(self): pass
+MethodType = type(_C()._b)
Simply renaming the dummy method from _m to _b was enough to work around the problem. Johannes' bug report first led to a number of improvements in diffoscope to aid in dissecting .pyc files, but upstream identified this as being caused by an issue surrounding interned strings, which is being tracked in CPython bug #78274.
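A generic way to catch this class of problem (a sketch, not Johannes' exact method) is to byte-compile the same module in two fresh interpreter runs and compare the resulting .pyc files, for example with diffoscope. Here mymodule.py is a stand-in name, and the cpython-310 filename assumes Python 3.10 as in the bug report:
$ python3 -m py_compile mymodule.py
$ cp __pycache__/mymodule.cpython-310.pyc first.pyc
$ rm -r __pycache__
$ python3 -m py_compile mymodule.py
$ diffoscope first.pyc __pycache__/mymodule.cpython-310.pyc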

New SPDX team to incorporate build metadata in Software Bill of Materials SPDX, the open standard for Software Bill of Materials (SBOM), is continuously developed by a number of teams and committees. However, SPDX has welcomed a new addition: a team dedicated to enhancing metadata about software builds, complementing reproducible builds in creating a more secure software supply chain. The SPDX Builds Team has been working throughout May to define the universal primitives shared by all build systems, including the who, what, where and how of builds:
  • Who: the identity of the person or organisation that controls the build infrastructure.
  • What: the inputs and outputs of a given build, combining metadata about the build's configuration with an SBOM describing source code and dependencies.
  • Where: the software packages making up the build system, from build orchestration tools such as Woodpecker CI and Tekton to language-specific tools.
  • How: the invocation of a build, linking metadata of a build to the identity of the person or automation tool that initiated it.
The SPDX Builds Team expects to have a usable data model by September, ready for inclusion in the SPDX 3.0 standard. The team welcomes new contributors, inviting those interested in joining to introduce themselves on the SPDX-Tech mailing list.

Talks at Debian Reunion Hamburg Some of the Reproducible Builds team (Holger Levsen, Mattia Rizzolo, Roland Clobus, Philip Rinn, etc.) met in real life at the Debian Reunion Hamburg (official homepage). There were several informal discussions amongst them, as well as two talks related to reproducible builds. First, Holger Levsen gave a talk on the status of Reproducible Builds for bullseye and bookworm and beyond (WebM, 210MB): Secondly, Roland Clobus gave a talk called Reproducible builds as applied to non-compiler output (WebM, 115MB):

Supply-chain security attacks This was another bumper month for supply-chain attacks in package repositories. Early in the month, Lance R. Vick noticed that the maintainer of the NPM foreach package had let their personal email domain expire, so he bought it and now controls foreach on NPM and the 36,826 projects that depend on it. Shortly afterwards, Drew DeVault published a related blog post titled When will we learn? that offers a brief timeline of major incidents in this area and, not uncontroversially, suggests that the correct way to ship packages is with your distribution's package manager.

Bootstrapping Bootstrapping is a process for building software tools progressively from a primitive compiler tool and source language up to a full Linux development environment with GCC, etc. This is important given the amount of trust we put in existing compiler binaries. This month, a bootstrappable mini-kernel was announced. Called boot2now, it comprises a series of compilers in the form of bootable machine images.

Google's new Assured Open Source Software service Google Cloud (the division responsible for the Google Compute Engine) announced a new Assured Open Source Software service. Noting the considerable 650% year-over-year increase in cyberattacks aimed at open source suppliers, the new service claims to enable enterprise and public sector users of open source software to easily incorporate the same OSS packages that Google uses into their own developer workflows. The announcement goes on to enumerate that packages curated by the new service would be:
  • Regularly scanned, analyzed, and fuzz-tested for vulnerabilities.
  • Have corresponding enriched metadata incorporating Container/Artifact Analysis data.
  • Are built with Cloud Build, including evidence of verifiable SLSA-compliance.
  • Are verifiably signed by Google.
  • Are distributed from an Artifact Registry secured and protected by Google.
(Full announcement)

A retrospective on the Rust programming language Andrew "bunnie" Huang published a long blog post this month promising a critical retrospective on the Rust programming language. Amongst many acute observations about the evolution of the language's syntax (etc.), the post begins to critique the language's approach to supply chain security ("Rust Has A Limited View of Supply Chain Security") and reproducibility ("You Can't Reproduce Someone Else's Rust Build"):
There's some bugs open with the Rust maintainers to address reproducible builds, but with the number of issues they have to deal with in the language, I am not optimistic that this problem will be resolved anytime soon. Assuming the only driver of the unreproducibility is the inclusion of OS paths in the binary, one fix to this would be to re-configure our build system to run in some sort of a chroot environment or a virtual machine that fixes the paths in a way that almost anyone else could reproduce. I say almost anyone else because this fix would be OS-dependent, so we'd be able to get reproducible builds under, for example, Linux, but it would not help Windows users where chroot environments are not a thing.
(Full post)

Reproducible Builds IRC meeting The minutes and logs from our May 2022 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on Tuesday 28th June at 15:00 UTC on #reproducible-builds on the OFTC network.

A new tool to improve supply-chain security in Arch Linux kpcyrd published yet another interesting tool related to reproducibility. Writing about the tool in a recent blog post, kpcyrd mentions that although many PKGBUILDs provide authentication in the context of signed Git tags (i.e. the ability to verify the Git tag was signed by one of the two trusted keys), they do not support pinning, i.e. upstream could create a new signed Git tag with an identical name and arbitrarily change the source code without the [maintainer] noticing. Conversely, other PKGBUILDs support pinning but not authentication. The new tool, auth-tarball-from-git, fixes both problems, as neatly outlined in kpcyrd's original blog post.

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 212, 213 and 214 to Debian unstable. Chris also made the following changes:
  • New features:
    • Add support for extracting vmlinuz Linux kernel images. [ ]
    • Support both python-argcomplete 1.x and 2.x. [ ]
    • Strip sticky etc. from x.deb: sticky Debian binary package [ ]. [ ]
    • Integrate test coverage with GitLab s concept of artifacts. [ ][ ][ ]
  • Bug fixes:
    • Don t mask differences in .zip or .jar central directory extra fields. [ ]
    • Don t show a binary comparison of .zip or .jar files if we have observed at least one nested difference. [ ]
  • Codebase improvements:
    • Substantially update comment for our calls to zipinfo and zipinfo -v. [ ]
    • Use assert_diff in test_zip over calling get_data with a separate assert. [ ]
    • Don t call re.compile and then call .sub on the result; just call re.sub directly. [ ]
    • Clarify the comment around the difference between --usage and --help. [ ]
  • Testsuite improvements:
    • Test --help and --usage. [ ]
    • Test that --help includes the file formats. [ ]
Vagrant Cascadian added an external tool reference xb-tool for GNU Guix [ ] as well as updated the diffoscope package in GNU Guix itself [ ][ ][ ].

Distribution work In Debian, 41 reviews of Debian packages were added, 85 were updated and 13 were removed this month adding to our knowledge about identified issues. A number of issue types have been updated, including adding a new nondeterministic_ordering_in_deprecated_items_collected_by_doxygen toolchain issue [ ] as well as ones for mono_mastersummary_xml_files_inherit_filesystem_ordering [ ], extended_attributes_in_jar_file_created_without_manifest [ ] and apxs_captures_build_path [ ]. Vagrant Cascadian performed a rough check of the reproducibility of core package sets in GNU Guix, and in openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducible builds website Chris Lamb updated the main Reproducible Builds website and documentation in a number of small ways, but also prepared and published an interview with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix. [ ][ ][ ][ ] In addition, Tim Jones added a link to the Talos Linux project [ ] and billchenchina fixed a dead link [ ].

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Add support for detecting running kernels that require attention. [ ]
    • Temporarily configure a host to support performing Debian builds for packages that lack .buildinfo files. [ ]
    • Update generated webpages to clarify wishes for feedback. [ ]
    • Update copyright years on various scripts. [ ]
  • Mattia Rizzolo:
    • Provide a facility so that Debian Live image generation can copy a file remotely. [ ][ ][ ][ ]
  • Roland Clobus:
    • Add initial support for testing generated images with OpenQA. [ ]
And finally, as usual, node maintenance was also performed by Holger Levsen [ ][ ].

Misc news On our mailing list this month:

Contact If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

27 May 2022

Reproducible Builds (diffoscope): diffoscope 214 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 214. This version includes the following changes:
[ Chris Lamb ]
* Support both python-argcomplete 1.x and 2.x.
[ Vagrant Cascadian ]
* Add external tool on GNU Guix for xb-tool.
You can find out more by visiting the project homepage.

16 May 2022

Matthew Garrett: Can we fix bearer tokens?

Last month I wrote about how bearer tokens are just awful, and a week later Github announced that someone had managed to exfiltrate bearer tokens from Heroku that gave them access to, well, a lot of Github repositories. This has inevitably resulted in a whole bunch of discussion about a number of things, but people seem to be largely ignoring the fundamental issue that maybe we just shouldn't have magical blobs that grant you access to basically everything even if you've copied them from a legitimate holder to Honest John's Totally Legitimate API Consumer.

To make it clearer what the problem is here, let's use an analogy. You have a safety deposit box. To gain access to it, you simply need to be able to open it with a key you were given. Anyone who turns up with the key can open the box and do whatever they want with the contents. Unfortunately, the key is extremely easy to copy - anyone who is able to get hold of your keyring for a moment is in a position to duplicate it, and then they have access to the box. Wouldn't it be better if something could be done to ensure that whoever showed up with a working key was someone who was actually authorised to have that key?

To achieve that we need some way to verify the identity of the person holding the key. In the physical world we have a range of ways to achieve this, from simply checking whether someone has a piece of ID that associates them with the safety deposit box all the way up to invasive biometric measurements that supposedly verify that they're definitely the same person. But computers don't have passports or fingerprints, so we need another way to identify them.

When you open a browser and try to connect to your bank, the bank's website provides a TLS certificate that lets your browser know that you're talking to your bank instead of someone pretending to be your bank. The spec allows this to be a bi-directional transaction - you can also prove your identity to the remote website. This is referred to as "mutual TLS", or mTLS, and a successful mTLS transaction ends up with both ends knowing who they're talking to, as long as they have a reason to trust the certificate they were presented with.

That's actually a pretty big constraint! We have a reasonable model for the server - it's something that's issued by a trusted third party and it's tied to the DNS name for the server in question. Clients don't tend to have a stable DNS identity, and that makes the entire thing sort of awkward. But, thankfully, maybe we don't need to? We don't need the client to be able to prove its identity to arbitrary third party sites here - we just need the client to be able to prove it's a legitimate holder of whichever bearer token it's presenting to that site. And that's a much easier problem.

Here's the simple solution - clients generate a TLS cert. This can be self-signed, because all we want to do here is be able to verify whether the machine talking to us is the same one that had a token issued to it. The client contacts a service that's going to give it a bearer token. The service requests mTLS auth without being picky about the certificate that's presented. The service embeds a hash of that certificate in the token before handing it back to the client. Whenever the client presents that token to any other service, the service ensures that the mTLS cert the client presented matches the hash in the bearer token. Copy the token without copying the mTLS certificate and the token gets rejected. Hurrah hurrah hats for everyone.
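To make the scheme concrete, here is a rough sketch using the openssl command line (this is not from the original post; the filenames and certificate subject are invented): the client generates a self-signed certificate, and the issuer computes the digest it would embed in the token, which every service can later recompute from whatever mTLS certificate the client presents.
# client: generate a self-signed EC key and certificate (filenames invented)
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
    -keyout client.key -out client.crt -nodes -days 365 -subj "/CN=client"
# issuer: compute the SHA-256 digest of the DER-encoded certificate and
# embed it in the bearer token; a service later recomputes this digest
# from the mTLS certificate the client presented and rejects the token
# on a mismatch
openssl x509 -in client.crt -outform DER | openssl dgst -sha256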

Well except for the obvious problem that if you're in a position to exfiltrate the bearer tokens you can probably just steal the client certificates and keys as well, and now you can pretend to be the original client and this is not adding much additional security. Fortunately pretty much everything we care about has the ability to store the private half of an asymmetric key in hardware (TPMs on Linux and Windows systems, the Secure Enclave on Macs and iPhones, either a piece of magical hardware or Trustzone on Android) in a way that avoids anyone being able to just steal the key.

How do we know that the key is actually in hardware? Here's the fun bit - it doesn't matter. If you're issuing a bearer token to a system then you're already asserting that the system is trusted. If the system is lying to you about whether or not the key it's presenting is hardware-backed then you've already lost. If it lied and the system is later compromised then sure all your apes get stolen, but maybe don't run systems that lie and avoid that situation as a result?

Anyway. This is covered in RFC 8705 so why aren't we all doing this already? From the client side, the largest generic issue is that TPMs are astonishingly slow in comparison to doing a TLS handshake on the CPU. RSA signing operations on TPMs can take around half a second, which doesn't sound too bad, except your browser is probably establishing multiple TLS connections to subdomains on the site it's connecting to and performance is going to tank. Fixing this involves doing whatever's necessary to convince the browser to pipe everything over a single TLS connection, and that's just not really where the web is right at the moment. Using EC keys instead helps a lot (~0.1 seconds per signature on modern TPMs), but it's still going to be a bottleneck.

The other problem, of course, is that ecosystem support for hardware-backed certificates is just awful. Windows lets you stick them into the standard platform certificate store, but the docs for this are hidden in a random PDF in a Github repo. Macs require you to do some weird bridging between the Secure Enclave API and the keychain API. Linux? Well, the standard answer is to do PKCS#11, and I have literally never met anybody who likes PKCS#11 and I have spent a bunch of time in standards meetings with the sort of people you might expect to like PKCS#11 and even they don't like it. It turns out that loading a bunch of random C bullshit that has strong feelings about function pointers into your security critical process is not necessarily something that is going to improve your quality of life, so instead you should use something like this and just have enough C to bridge to a language that isn't secretly plotting to kill your pets the moment you turn your back.

And, uh, obviously none of this matters at all unless people actually support it. Github has no support at all for validating the identity of whoever holds a bearer token. Most issuers of bearer tokens have no support for embedding holder identity into the token. This is not good! As of last week, all three of the big cloud providers support virtualised TPMs in their VMs - we should be running CI on systems that can do that, and tying any issued tokens to the VMs that are supposed to be making use of them.

So sure, this isn't trivial. But it's also not impossible, and making this stuff work would improve the security of, well, everything. We literally have the technology to prevent attacks like the one Github suffered. What do we have to do to get people to actually start working on implementing that?


13 May 2022

Antoine Beaupré: BTRFS notes

I'm not a fan of BTRFS. This page serves as a reminder of why, but also a cheat sheet to figure out basic tasks in a BTRFS environment because those are not obvious to me, even after repeatedly having to deal with them. Content warning: there might be mentions of ZFS.

Stability concerns I'm worried about BTRFS stability, which has been historically ... changing. RAID-5 and RAID-6 are still marked unstable, for example. It's kind of a lucky guess whether your current kernel will behave properly with your planned workload. For example, in Linux 4.9, RAID-1 and RAID-10 were marked as "mostly OK" with a note that says:
Needs to be able to create two copies always. Can get stuck in irreversible read-only mode if only one copy can be made.
Even as of now, RAID-1 and RAID-10 have this note:
The simple redundancy RAID levels utilize different mirrors in a way that does not achieve the maximum performance. The logic can be improved so the reads will spread over the mirrors evenly or based on device congestion.
Granted, that's not a stability concern anymore, just a performance one. A reviewer of a draft of this article actually claimed that BTRFS only reads from one of the drives, which hopefully is inaccurate, but goes to show how confusing all this is. There are other warnings in the Debian wiki that are quite scary. Even the legendary Arch wiki still has a warning on top of their BTRFS page. And even if those issues are now fixed, it can be hard to tell when they were fixed. There is a changelog by feature, but it explicitly warns that it doesn't know "which kernel version it is considered mature enough for production use", so it's also useless for this. It would have been much better if BTRFS had been released into the world only when those bugs were completely fixed. Or if, at least, features were announced only when they were stable, not just "we merged to mainline, good luck". Even now, we get mixed messages in the official BTRFS documentation, which says "The Btrfs code base is stable" (main page) while at the same time clearly stating unstable parts in the status page (currently RAID56). There are much harsher BTRFS critics than me out there, so I will stop here, but let's just say that I feel a little uncomfortable trusting server data with full RAID arrays to BTRFS. But surely, for a workstation, things should just work smoothly... Right? Well, let's see the snags I hit.

My BTRFS test setup Before I go any further, I should probably clarify how I am testing BTRFS in the first place. The reason I tried BTRFS is that I was ... let's just say "strongly encouraged" by the LWN editors to install Fedora for the terminal emulators series. That, in turn, meant the setup was done with BTRFS, because that was somewhat the default in Fedora 27 (or did I want to experiment? I don't remember, it's been too long already). So Fedora was set up on my 1TB HDD and, with encryption, the partition table looks like this:
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
 sda1                   8:1    0   200M  0 part  /boot/efi
 sda2                   8:2    0     1G  0 part  /boot
 sda3                   8:3    0   7,8G  0 part  
   fedora_swap        253:5    0   7.8G  0 crypt [SWAP]
 sda4                   8:4    0 922,5G  0 part  
   fedora_crypt       253:4    0 922,5G  0 crypt /
(This might not entirely be accurate: I rebuilt this from the Debian side of things.) This is pretty straightforward, except for the swap partition: normally, I just treat swap like any other volume and create it as a logical volume. This is now just speculation, but I bet it was set up this way because swap support was only added to BTRFS in Linux 5.0. I fully expect BTRFS experts to yell at me now because this is an old setup and BTRFS is so much better now, but that's exactly the point here. That setup is not that old (2018? old? really?), and migrating to a new partition scheme isn't exactly practical right now. But let's move on to more practical considerations.

No builtin encryption BTRFS aims at replacing the entire mdadm, LVM, and ext4 stack with a single entity, and adding new features like deduplication, checksums and so on. Yet there is one feature it is critically missing: encryption. See, my typical stack is actually mdadm, LUKS, and then LVM and ext4. This is convenient because I have only a single volume to decrypt. If I were to use BTRFS on servers, I'd need to have one LUKS volume per disk. For a simple RAID-1 array, that's not too bad: one extra key. But for large RAID-10 arrays, this gets really unwieldy. The obvious BTRFS alternative, ZFS, supports encryption out of the box and mixes it above the disks so you only have one passphrase to enter. The main downside of ZFS encryption is that it happens above the "pool" level so you can typically see filesystem names (and possibly snapshots, depending on how it is built), which is not the case with a more traditional stack.
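To illustrate what "one LUKS volume per disk" means in practice, here is a hypothetical sketch (device names invented) of a two-disk BTRFS RAID-1, where each member must be unlocked separately:
# unlock each disk separately: one passphrase prompt per member
cryptsetup open /dev/sda2 crypt_a
cryptsetup open /dev/sdb2 crypt_b
# only then can BTRFS assemble its RAID-1 across the decrypted devices
mkfs.btrfs -m raid1 -d raid1 /dev/mapper/crypt_a /dev/mapper/crypt_b
mount /dev/mapper/crypt_a /mnt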

Subvolumes, filesystems, and devices I find BTRFS's architecture to be utterly confusing. In the traditional LVM stack (which is itself kind of confusing if you're new to that stuff), you have those layers:
  • disks: let's say /dev/nvme0n1 and nvme1n1
  • RAID arrays with mdadm: let's say the above disks are joined in a RAID-1 array in /dev/md1
  • volume groups or VG with LVM: the above RAID device (technically a "physical volume" or PV) is assigned into a VG, let's call it vg_tbbuild05 (multiple PVs can be added to a single VG which is why there is that abstraction)
  • LVM logical volumes: out of that volume group actually "virtual partitions" or "logical volumes" are created, that is where your filesystem lives
  • filesystem, typically with ext4: that's your normal filesystem, which treats the logical volume as just another block device
A typical server setup would look like this:
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1                   259:0    0   1.7T  0 disk  
 nvme0n1p1               259:1    0     8M  0 part  
 nvme0n1p2               259:2    0   512M  0 part  
   md0                     9:0    0   511M  0 raid1 /boot
 nvme0n1p3               259:3    0   1.7T  0 part  
   md1                     9:1    0   1.7T  0 raid1 
     crypt_dev_md1       253:0    0   1.7T  0 crypt 
       vg_tbbuild05-root 253:1    0    30G  0 lvm   /
       vg_tbbuild05-swap 253:2    0 125.7G  0 lvm   [SWAP]
       vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
 nvme0n1p4               259:4    0     1M  0 part
I stripped the other nvme1n1 disk because it's basically the same. Now, if we look at my BTRFS-enabled workstation, which doesn't even have RAID, we have the following:
  • disk: /dev/sda with, again, /dev/sda4 being where BTRFS lives
  • filesystem: fedora_crypt, which is, confusingly, kind of like a volume group. it's where everything lives. i think.
  • subvolumes: home, root, /, etc. those are actually the things that get mounted. you'd think you'd mount a filesystem, but no, you mount a subvolume. that is backwards.
It looks something like this to lsblk:
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
 sda1                   8:1    0   200M  0 part  /boot/efi
 sda2                   8:2    0     1G  0 part  /boot
 sda3                   8:3    0   7,8G  0 part  [SWAP]
 sda4                   8:4    0 922,5G  0 part  
   fedora_crypt       253:4    0 922,5G  0 crypt /srv
Notice how we don't see all the BTRFS volumes here? Maybe it's because I'm mounting this from the Debian side, but lsblk definitely gets confused here. I frankly don't quite understand what's going on, even after repeatedly looking around the rather dismal documentation. But that's what I gather from the following commands:
root@curie:/home/anarcat# btrfs filesystem show
Label: 'fedora'  uuid: 5abb9def-c725-44ef-a45e-d72657803f37
    Total devices 1 FS bytes used 883.29GiB
    devid    1 size 922.47GiB used 916.47GiB path /dev/mapper/fedora_crypt
root@curie:/home/anarcat# btrfs subvolume list /srv
ID 257 gen 108092 top level 5 path home
ID 258 gen 108094 top level 5 path root
ID 263 gen 108020 top level 258 path root/var/lib/machines
I only got to that point through trial and error. Notice how I use an existing mountpoint to list the related subvolumes. If I try to use the filesystem path, the one that's listed in filesystem show, I fail:
root@curie:/home/anarcat# btrfs subvolume list /dev/mapper/fedora_crypt 
ERROR: not a btrfs filesystem: /dev/mapper/fedora_crypt
ERROR: can't access '/dev/mapper/fedora_crypt'
Maybe I just need to use the label? Nope:
root@curie:/home/anarcat# btrfs subvolume list fedora
ERROR: cannot access 'fedora': No such file or directory
ERROR: can't access 'fedora'
This is really confusing. I don't even know if I understand this right, and I've been staring at this all afternoon. Hopefully, the lazyweb will correct me eventually. (As an aside, why are they called "subvolumes"? If something is a "sub" of "something else", that "something else" must exist, right? But no, BTRFS doesn't have "volumes", it only has "subvolumes". Go figure. Presumably the filesystem still holds "files" though; at least empirically it doesn't seem like it lost anything so far.) In any case, at least I can refer to this section in the future, the next time I fumble around the btrfs commandline, as I surely will. I will possibly even update this section as I get better at it, or based on my readers' judicious feedback.

Mounting BTRFS subvolumes So how did I even get to that point? I have this in my /etc/fstab, on the Debian side of things:
UUID=5abb9def-c725-44ef-a45e-d72657803f37   /srv    btrfs  defaults 0   2
This thankfully ignores all the subvolume nonsense because it relies on the UUID. mount tells me that's actually the top-level (/) subvolume:
root@curie:/home/anarcat# mount | grep /srv
/dev/mapper/fedora_crypt on /srv type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)
Let's see if I can mount the other volumes I have on there. Remember that subvolume list showed I had home, root, and var/lib/machines. Let's try root:
mount -o subvol=root /dev/mapper/fedora_crypt /mnt
Interestingly, root is not the same as /, it's a different subvolume! It seems to be the Fedora root (/, really) filesystem. No idea what is happening here. I also have a home subvolume, let's mount it too, for good measure:
mount -o subvol=home /dev/mapper/fedora_crypt /mnt/home
Note that lsblk doesn't notice those two new mountpoints, and that's normal: it only lists block devices and subvolumes (rather inconveniently, I'd say) do not show up as devices:
root@curie:/home/anarcat# lsblk 
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
 sda1                   8:1    0   200M  0 part  
 sda2                   8:2    0     1G  0 part  
 sda3                   8:3    0   7,8G  0 part  
 sda4                   8:4    0 922,5G  0 part  
   fedora_crypt       253:4    0 922,5G  0 crypt /srv
This is really, really confusing. Maybe I did something wrong in the setup. Maybe it's because I'm mounting it from outside Fedora. Either way, it just doesn't feel right.
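For future reference, if I wanted those subvolume mounts to survive a reboot, the fstab entries would presumably look something like this (a sketch reusing the UUID above; the mountpoints are the temporary ones used earlier, and I have not actually committed this):
UUID=5abb9def-c725-44ef-a45e-d72657803f37   /mnt        btrfs  defaults,subvol=root  0   2
UUID=5abb9def-c725-44ef-a45e-d72657803f37   /mnt/home   btrfs  defaults,subvol=home  0   2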

No disk usage per volume If you want to see what's taking up space in one of those subvolumes, tough luck:
root@curie:/home/anarcat# df -h  /srv /mnt /mnt/home
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/fedora_crypt  923G  886G   31G  97% /srv
/dev/mapper/fedora_crypt  923G  886G   31G  97% /mnt
/dev/mapper/fedora_crypt  923G  886G   31G  97% /mnt/home
(Notice, in passing, that it looks like the same filesystem is mounted in different places. In that sense, you'd expect /srv and /mnt (and /mnt/home?!) to be exactly the same, but no: they are entirely different directory structures, which I will not call "filesystems" here because everyone's head will explode in sparks of confusion.) Yes, disk space is shared (that's the Size and Avail columns, makes sense). But nope, no cookie for you: they all have the same Used columns, so you need to actually walk the entire filesystem to figure out how much space each subvolume takes. (For future reference, that's basically:
root@curie:/home/anarcat# time du -schx /mnt/home /mnt /srv
124M    /mnt/home
7.5G    /mnt
875G    /srv
883G    total
real    2m49.080s
user    0m3.664s
sys 0m19.013s
And yes, that was painfully slow.) ZFS actually has some oddities in that regard, but at least it tells me how much disk each volume (and snapshot) takes:
root@tubman:~# time df -t zfs -h
Filesystem         Size  Used Avail Use% Mounted on
rpool/ROOT/debian  3.5T  1.4G  3.5T   1% /
rpool/var/tmp      3.5T  384K  3.5T   1% /var/tmp
rpool/var/spool    3.5T  256K  3.5T   1% /var/spool
rpool/var/log      3.5T  2.0G  3.5T   1% /var/log
rpool/home/root    3.5T  2.2G  3.5T   1% /root
rpool/home         3.5T  256K  3.5T   1% /home
rpool/srv          3.5T   80G  3.5T   3% /srv
rpool/var/cache    3.5T  114M  3.5T   1% /var/cache
bpool/BOOT/debian  571M   90M  481M  16% /boot
real    0m0.003s
user    0m0.002s
sys 0m0.000s
That's 56360 times faster, by the way. But yes, that's not fair: those in the know will know there's a different command to do what df does with BTRFS filesystems, the btrfs filesystem usage command:
root@curie:/home/anarcat# time btrfs filesystem usage /srv
Overall:
    Device size:         922.47GiB
    Device allocated:        916.47GiB
    Device unallocated:        6.00GiB
    Device missing:          0.00B
    Used:            884.97GiB
    Free (estimated):         30.84GiB  (min: 27.84GiB)
    Free (statfs, df):        30.84GiB
    Data ratio:               1.00
    Metadata ratio:           2.00
    Global reserve:      512.00MiB  (used: 0.00B)
    Multiple profiles:              no
Data,single: Size:906.45GiB, Used:881.61GiB (97.26%)
   /dev/mapper/fedora_crypt  906.45GiB
Metadata,DUP: Size:5.00GiB, Used:1.68GiB (33.58%)
   /dev/mapper/fedora_crypt   10.00GiB
System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
   /dev/mapper/fedora_crypt   16.00MiB
Unallocated:
   /dev/mapper/fedora_crypt    6.00GiB
real    0m0,004s
user    0m0,000s
sys 0m0,004s
Almost as fast as ZFS's df! Good job. But wait. That doesn't actually tell me usage per subvolume. Notice it's filesystem usage, not subvolume usage, which unhelpfully refuses to exist. That command only shows that one filesystem's internal statistics, which are pretty opaque. You can also appreciate that it's wasting 6GB of "unallocated" disk space there: I probably did something Very Wrong and should be punished by Hacker News. I also wonder why it has 1.68GB of "metadata" used... At this point, I just really want to throw that thing out of the window and restart from scratch. I don't really feel like learning the BTRFS internals, as they seem oblique and completely bizarre to me. It feels a little like the state of PHP now: it's actually pretty solid, but built upon so many layers of cruft that I still feel it corrupts my brain every time I have to deal with it (needle or haystack first? anyone?)...

Conclusion I find BTRFS utterly confusing and I'm worried about its reliability. I think a lot of work is needed on usability and coherence before I even consider running this anywhere other than a lab, and that's really too bad, because there are really nice features in BTRFS that would greatly help my workflow. (I want to use filesystem snapshots as high-performance, high-frequency backups.) So now I'm experimenting with OpenZFS. It's so much simpler, it just works, and it's rock solid. After this 8 minute read, I had a good understanding of how ZFS worked. Here's the 30 seconds overview:
  • vdev: a RAID array
  • zpool: a volume group of vdevs
  • datasets: normal filesystems (or block device, if you want to use another filesystem on top of ZFS)
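To make that mapping concrete, here is a hedged sketch of creating a mirrored pool and a dataset (the pool name and devices are invented, not my actual setup):
# two disks become a mirror vdev inside a new pool
zpool create rpool mirror /dev/nvme0n1p3 /dev/nvme1n1p3
# datasets are carved out of the pool and mounted wherever you like
zfs create -o mountpoint=/srv rpool/srv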
There's also other special volumes like caches and logs that you can (really easily, compared to LVM caching) use to tweak your setup. You might also want to look at recordsize or ashift to tweak the filesystem to better fit your workload (or to deal with drives lying about their sector size, I'm looking at you Samsung), but that's it. Running ZFS on Linux currently involves building kernel modules from scratch on every host, which I think is pretty bad. But I was able to set up a ZFS-only server using this excellent documentation without too much trouble. I'm hoping some day the copyright issues are resolved and we can at least ship binary packages, but the politics (e.g. convincing Debian that it is the right thing to do) and the logistics (e.g. DKMS auto-builders? is that even a thing? how about signed DKMS packages? fun-fun-fun!) seem really impractical. Who knows, maybe hell will freeze over (again) and Oracle will fix the CDDL. I personally think that we should just completely ignore this problem (which wasn't even supposed to be a problem) and ship binary packages directly, but I'm a pragmatist and do not always fit well with the free software fundamentalists. All of this to say that, short term, we don't have a reliable, advanced filesystem/logical disk manager in Linux. And that's really too bad.

9 May 2022

Robert McQueen: Evolving a strategy for 2022 and beyond

As a board, we have been working on several initiatives to make the Foundation a better asset for the GNOME Project. We're working on a number of threads in parallel, so I wanted to explain the big picture a bit more to try and connect together things like the new ED search and the bylaw changes. We're all here to see free and open source software succeed and thrive, so that people can be truly empowered with agency over their technology, rather than being passive consumers. We want to bring GNOME to as many people as possible so that they have computing devices that they can inspect, trust, share and learn from. In previous years we've tried to boost the relevance of GNOME (or technologies such as GTK) or solicit donations from businesses and individuals with existing engagement in FOSS ideology and technology. The problem with this approach is that we're mostly addressing people and organisations who are already supporting or contributing to FOSS in some way. To truly scale our impact, we need to look to the outside world, build better awareness of GNOME outside of our current user base, and find opportunities to secure funding to invest back into the GNOME project. The Foundation supports the GNOME project with infrastructure, arranging conferences, sponsoring hackfests and travel, design work, legal support, managing sponsorships, the advisory board, and being the fiscal sponsor of GNOME, GTK and Flathub, and we will keep doing all of these things. What we're talking about here are additional ways for the Foundation to support the GNOME project: we want to go beyond these activities, and invest into GNOME to grow its adoption amongst people who need it. This has a cost, and that means, in parallel with these initiatives, we need to find partners to fund this work. Neil has previously talked about themes such as education, advocacy and privacy, but we've not previously translated these into clear specific initiatives that we would establish in addition to the Foundation's existing work. This is all a work in progress and we welcome any feedback from the community about refining these ideas, but here are the current strategic initiatives the board is working on. We've been thinking about growing our community by encouraging and retaining diverse contributors, and addressing evolving computing needs which aren't currently well served on the desktop. Initiative 1: Welcoming newcomers. The community is already spending a lot of time welcoming newcomers and teaching them the best practices. Those activities are as time-consuming as they are important, but currently a handful of individuals are running initiatives such as GSoC, Outreachy and outreach to universities. These activities help bring diverse individuals and perspectives into the community, and help them develop skills and experience of collaborating to create open source projects. We want to make those efforts more sustainable by finding sponsors for these activities. With funding, we can hire people to dedicate their time to operating these programs, including paid mentors and creating materials to support newcomers in future, such as developer documentation, examples and tutorials. This is the initiative that needs to be refined the most before we can turn it into something real. Initiative 2: Diverse and sustainable Linux app ecosystem.
I spoke at the Linux App Summit about the work that GNOME and Endless have been supporting in Flathub, but this is an example of something which has a great overlap between commercial, technical and mission-based advantages. The key goal here is to improve the financial sustainability of participating in our community, which in turn has an impact on the diversity of who we can expect to afford to enter and remain in our community. We believe this is critically important for individual developers and contributors to unlock earning potential from our ecosystem, through donations or app sales. In turn, a healthy app ecosystem also improves the usefulness of the Linux desktop as a whole for potential users. We believe that we can build a case for commercial vendors in the space to join an advisory board alongside GNOME, KDE, etc. to input into the governance and contribute to the costs of growing Flathub. Initiative 3: Local-first applications for the GNOME desktop. This is what Thib has been starting to discuss on Discourse, in this thread. There are many different threats to free access to computing and information in today's world. The GNOME desktop and apps need to give users convenient and reliable access to technology which works similarly to the tools they already use everyday, but keeps them and their data safe from surveillance, censorship, filtering or just being completely cut off from the Internet. We believe that we can seek both philanthropic and grant funding for this work. It will make GNOME a more appealing and comprehensive offering for the many people who want to protect their privacy. The idea is that these initiatives all sit on the boundary between the GNOME community and the outside world. If the Foundation can grow and deliver these kinds of projects, we are reaching new people, new contributors and new funding. These contributions and investments back into GNOME represent a true win-win for the newcomers and our existing community. (Originally posted to GNOME Discourse, please feel free to join the discussion there.)

5 May 2022

Reproducible Builds: Reproducible Builds in April 2022

Welcome to the April 2022 report from the Reproducible Builds project! In these reports, we try to summarise the most important things that we have been up to over the past month. If you are interested in contributing to the project, please take a few moments to visit our Contribute page on our website.

News Cory Doctorow published an interesting article this month about the possibility of Undetectable backdoors for machine learning models. Given that machine learning models can provide unpredictably incorrect results, Doctorow recounts that there exists another category of adversarial examples: a gimmicked machine-learning input that, to the human eye, seems totally normal, but which causes the ML system to misfire dramatically, permitting the possibility of planting undetectable back doors into any machine learning system at training time.
Chris Lamb published two supporter spotlights on our blog: the first about Amateur Radio Digital Communications (ARDC) and the second about the Google Open Source Security Team (GOSST).
Piergiorgio Ladisa, Henrik Plate, Matias Martinez and Olivier Barais published a new academic paper titled A Taxonomy of Attacks on Open-Source Software Supply Chains (PDF):
This work proposes a general taxonomy for attacks on open-source supply chains, independent of specific programming languages or ecosystems, and covering all supply chain stages from code contributions to package distribution. Taking the form of an attack tree, it covers 107 unique vectors, linked to 94 real-world incidents, and mapped to 33 mitigating safeguards.

Elsewhere in academia, Ly Vu Duc published his PhD thesis. Titled Towards Understanding and Securing the OSS Supply Chain (PDF), Duc's abstract reads as follows:
This dissertation starts from the first link in the software supply chain, developers: many developers do not update their vulnerable software libraries, thus exposing the users of their code to security risks. To understand how they choose, manage and update the libraries, packages, and other Open-Source Software (OSS) that become the building blocks of companies' completed products consumed by end-users, twenty-five semi-structured interviews were conducted with developers of both large and small-medium enterprises in nine countries. All interviews were transcribed, coded, and analyzed according to applied thematic analysis.

Upstream news Filippo Valsorda published an informative blog post recently called How Go Mitigates Supply Chain Attacks, outlining the high-level features of the Go ecosystem that help prevent various supply-chain attacks.
There was new/further activity on a pull request filed against openssl by Sebastian Andrzej Siewior in order to prevent saved CFLAGS (which may contain the -fdebug-prefix-map=<PATH> flag that is used to strip the build path from the debug info) from being recorded: if this information remains recorded, then the binary is no longer reproducible if the build directory changes.

Events The Linux Foundation's SupplyChainSecurityCon will take place June 21st to 24th 2022, both virtually and in Austin, Texas. Long-time Reproducible Builds and openSUSE contributor Bernhard M. Wiedemann learned that he had his talk accepted, and will speak on Reproducible Builds: Unexpected Benefits and Problems on June 21st.
There will be an in-person Debian Reunion in Hamburg, Germany later this year, taking place from 23 to 30 May. Although this is a Debian event, there will be some folks from the broader Reproducible Builds community and, of course, everyone is welcome. Please see the event page on the Debian wiki for more information. 41 people have registered so far, and there are approximately 10 on-site beds still left.
The minutes and logs from our April 2022 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on May 31st at 15:00 UTC on #reproducible-builds on the OFTC network.

Debian Roland Clobus wrote another in-depth status update on live Debian images, summarising the current situation that all major desktops build reproducibly with bullseye, bookworm and sid, including the Cinnamon desktop on bookworm and sid, but at a small functionality cost (14 words will be incorrectly abbreviated). This work incorporated:
  • Reporting an issue about unnecessarily modified timestamps in the daily Debian installer images. [ ]
  • Reporting a bug against the debian-installer in order to use a suitable kernel version. (#1006800)
  • Reporting a bug in texlive-binaries regarding the unreproducible content of .fmt files. (#1009196)
  • Adding hacks to make the Cinnamon desktop image reproducible in bookworm and sid. [ ]
  • Adding a script to rebuild a live-build ISO image from a given timestamp. [ ]
  • etc.
On our mailing list, Venkata Pyla started a thread on the "Debian debconf cache is non-reproducible" issue while creating system images, and Vagrant Cascadian posted an excellent summary of the reproducibility status of core package sets in Debian, soliciting similar information from other distributions.
Lastly, 122 reviews of Debian packages were added, 44 were updated and 193 were removed this month, adding to our extensive knowledge about identified issues. A number of issue types have been updated as well, including timestamps_generated_by_hevea, randomness_in_ocaml_preprocessed_files, build_path_captured_in_emacs_el_file, golang_compiler_captures_build_path_in_binary and build_path_captured_in_assembly_objects.

Other distributions Happy birthday to GNU Guix, which recently turned 10 years old! People have been sharing their stories, in which reproducible builds and bootstrappable builds are a recurring theme as a feature important to its users and developers. The experiences are available on the GNU Guix blog as well as in a post on fossandcrafts.org.
In openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 210 and 211 to Debian unstable, and noticed that some Python .pyc files are reported as data, so we should support .pyc as a fallback filename extension [ ]. In addition, Mattia Rizzolo disabled the Gnumeric tests in Debian as the package is not currently available [ ] and dropped mplayer from Build-Depends too [ ]. Mattia also fixed an issue to ensure that the PATH environment variable is properly modified for all actions, not just when running the comparator. [ ]

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Daniel Golle:
    • Prefer a different solution to avoid building all OpenWrt packages; skip packages from optional community feeds. [ ]
  • Holger Levsen:
    • Detect Python deprecation warnings in the node health check. [ ]
    • Detect failure to build the Debian Installer. [ ]
  • Mattia Rizzolo:
    • Install disorderfs for building OpenWrt packages. [ ]
  • Paul Spooren (OpenWrt-related changes):
    • Don't build all packages whilst the core packages are not yet reproducible. [ ]
    • Add a missing RUN directive to node_cleanup. [ ]
    • Be less verbose during a toolchain build. [ ]
    • Use disorderfs for rebuilds and update the documentation to match. [ ][ ][ ]
  • Roland Clobus:
    • Publish the last reproducible Debian ISO image. [ ]
    • Use the rebuild.sh script from the live-build package. [ ]
Lastly, node maintenance was also performed by Holger Levsen [ ][ ].
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

27 April 2022

Antoine Beaupré: building Debian packages under qemu with sbuild

I've been using sbuild for a while to build my Debian packages, mainly because it's what is used by the Debian autobuilders, but also because it's pretty powerful and efficient. Configuring it just right, however, can be a challenge. In my quick Debian development guide, I had a few pointers on how to configure sbuild with the normal schroot setup, but today I finished a qemu based configuration.

Why I want to use qemu mainly because it provides better isolation than a chroot. I sponsor packages sometimes, and while I typically audit the source code before building, it still feels like the extra protection shouldn't hurt. I also like the idea of unifying my existing virtual machine setup with my build setup. My current VM setup is kind of all over the place: libvirt, vagrant, GNOME Boxes, etc. I've been slowly converging on libvirt, however, and most solutions I use right now rely on qemu under the hood, certainly not chroots... I could also have decided to go with containers like LXC, LXD, Docker (with conbuilder, whalebuilder, docker-buildpackage), systemd-nspawn (with debspawn), unshare (with schroot --chroot-mode=unshare), or whatever: I didn't feel those offer the level of isolation that is provided by qemu. The main downside of this approach is that it is (obviously) slower than native builds. But on modern hardware, that cost should be minimal.

How Basically, you need this:
sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian
Then to make this used by default, add this to ~/.sbuildrc:
# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';
Note that the above will use the default autopkgtest (1GB, one core) and qemu (128MB, one core) configuration, which might be a little low on resources. You probably want to be explicit about this, with something like the following:
# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @_qemu_options = ['--ram-size=4096', '--cpus=2'];
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @qemu_options, '/srv/sbuild/qemu/%r-%a.img'];
This configuration will:
  1. create a virtual machine image in /srv/sbuild/qemu for unstable
  2. tell sbuild to use that image to create a temporary VM to build the packages
  3. tell sbuild to run autopkgtest (which should really be the default)
  4. tell autopkgtest to use qemu for builds and for tests
Note that the VM created by sbuild-qemu-create has an unlocked root account with an empty password.
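With that configuration in place, building a package is then just a matter of running sbuild from the source tree as usual (a hedged usage sketch; the package directory is hypothetical):
cd hello
sbuild -d unstable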

Other useful tasks
  • enter the VM to run tests; changes will be discarded (thanks Nick Brown for the sbuild-qemu-boot tip!):
     sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
    
    That program is shipped only with bookworm and later; an equivalent command is:
     qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
    
    The key argument here is -snapshot.
  • enter the VM to make permanent changes, which will not be discarded:
     sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
    
    Equivalent command:
     sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
    
  • update the VM (thanks lavamind):
     sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
    
  • build in a specific VM regardless of the suite specified in the changelog (e.g. UNRELEASED, bookworm-backports, bookworm-security, etc):
     sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
    
    Note that you'd also need to pass --autopkgtest-opts if you want autopkgtest to run in the correct VM as well:
     sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
    
    You might also need parameters like --ram-size if you customized it above.
And yes, this is all quite complicated and could be streamlined a little, but that's what you get when you have years of legacy and just want to get stuff done. It seems to me autopkgtest-virt-qemu should have a magic flag that starts a shell for you, but it doesn't look like that's a thing. When that program starts, it just says ok and sits there. Maybe because the authors consider the above to be simple enough (see also bug #911977 for a discussion of this problem).

Live access to a running test When autopkgtest starts a VM, it uses this funky qemu commandline:
qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
... which is a typical qemu commandline, I'm sorry to say. That gives us a VM with those settings (paths are relative to a temporary directory, /tmp/autopkgtest-qemu.w1mlh54b/ in the above example):
  • the shared/ directory is, well, shared with the VM
  • port 10022 is forwarded to the VM's port 22, presumably for SSH, but no SSH server is started by default
  • the ttyS0 and ttyS1 UNIX sockets are mapped to the first two serial ports (use nc -U to talk with those)
  • the monitor UNIX socket is a qemu control socket (see the QEMU monitor documentation, also nc -U)
In other words, it's possible to access the VM with:
nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS0
The nc socket interface is ... not great, but it works well enough. And you can probably fire up an SSHd to get a better shell if you feel like it.
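For example, assuming an SSH server was installed and started inside the VM (not the default, as noted above), the forwarded port gives a normal shell:
# port 10022 on the host is forwarded to the VM's port 22
ssh -p 10022 root@127.0.0.1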

Nitty-gritty details no one cares about

Fixing hang in sbuild cleanup I'm having a hard time making heads or tails of this, but please bear with me. In sbuild + schroot, there's this notion that we don't really need to clean up after ourselves inside the schroot, as the schroot will just be deleted anyways. This behavior seems to be handled by the internal "Session Purged" parameter. At least in lib/Sbuild/Build.pm, we can see this:
my $is_cloned_session = (defined ($session->get('Session Purged')) &&
             $session->get('Session Purged') == 1) ? 1 : 0;
[...]
if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}
The schroot builder defines that parameter as:
    $self->set('Session Purged', $info->{'Session Purged'});
... which is ... a little confusing to me. $info is:
my $info = $self->get('Chroots')->get_info($schroot_session);
... so I presume that depends on whether the schroot was correctly cleaned up? I stopped digging there... ChrootUnshare.pm is way more explicit:
$self->set('Session Purged', 1);
I wonder if we should do something like this with the autopkgtest backend. I guess people might technically use it with something other than qemu, but qemu is the typical use case of the autopkgtest backend, in my experience. Or at least certainly with things that clean up after themselves. Right? For some reason, before I added this line to my configuration:
$purge_build_deps = 'never';
... the "Cleanup" step would just completely hang. It was quite bizarre.

Digression on the diversity of VM-like things There are a lot of different virtualization solutions one can use (e.g. Xen, KVM, Docker or Virtualbox). I have also found libguestfs to be useful to operate on virtual images in various ways. Libvirt and Vagrant are also useful wrappers on top of the above systems. There are particularly a lot of different tools which use Docker, virtual machines or some sort of isolation stronger than chroot to build packages. Here are some of the alternatives I am aware of. Take, for example, Whalebuilder, which uses Docker to build packages instead of pbuilder or sbuild. Docker provides more isolation than a simple chroot: in whalebuilder, packages are built without network access and inside a virtualized environment. Keep in mind there are limitations to Docker's security, and that pbuilder and sbuild do build under a different user, which will limit the security issues with building untrusted packages. On the upside, some of those things are being fixed: whalebuilder is now an official Debian package (whalebuilder) and has added the feature of passing custom arguments to dpkg-buildpackage. None of those solutions (except the autopkgtest/qemu backend) are implemented as an sbuild plugin, which would greatly reduce their complexity. I was previously using Qemu directly to run virtual machines, and had to create VMs by hand with various tools. This didn't work so well, so I switched to using Vagrant as a de-facto standard to build development environment machines, but I'm returning to Qemu because it uses a similar backend as KVM and can be used to host longer-running virtual machines through libvirt. The great thing now is that autopkgtest has good support for qemu, and sbuild has bridged the gap and can use it as a build backend. I originally had found those bugs in that setup, but all of them are now fixed:
  • #911977: sbuild: how do we correctly guess the VM name in autopkgtest?
  • #911979: sbuild: fails on chown in autopkgtest-qemu backend
  • #911963: autopkgtest qemu build fails with proxy_cmd: parameter not set
  • #911981: autopkgtest: qemu server warns about missing CPU features
So we have unification! It's possible to run your virtual machines and Debian builds using a single VM image backend storage, which is no small feat, in my humble opinion. See the sbuild-qemu blog post for the announcement. Now I just need to figure out how to merge Vagrant, GNOME Boxes, and libvirt together, which should be a matter of placing images in the right place... right? See also hosting.

pbuilder vs sbuild I was previously using pbuilder and switched in 2017 to sbuild. AskUbuntu.com has a good comparison between pbuilder and sbuild that shows they are pretty similar. The big advantage of sbuild is that it is the tool in use on the buildds, and it's written in Perl instead of shell. My concerns about switching were POLA (I'm used to pbuilder), the fact that pbuilder runs as a separate user (this works with sbuild as well now, if the _apt user is present), and setting up COW semantics in sbuild (you can't just plug cowbuilder there; you need to configure overlayfs or aufs, which was non-trivial in Debian jessie). Ubuntu folks, again, have more documentation there. Debian also has extensive documentation, especially about how to configure overlays. I was ultimately convinced by stapelberg's post on the topic, which shows how much simpler sbuild really is...

Who Thanks lavamind for the introduction to the sbuild-qemu package.

14 April 2022

Reproducible Builds: Supporter spotlight: Amateur Radio Digital Communications (ARDC)

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about the project and the work that we do. This is the third instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation. Today, however, we'll be talking with Dan Romanchik, Communications Manager at Amateur Radio Digital Communications (ARDC).
Chris Lamb: Hey Dan, it's nice to meet you! So, for someone who has not heard of Amateur Radio Digital Communications (ARDC) before, could you tell us what your foundation is about? Dan: Sure! ARDC's mission is to support, promote, and enhance experimentation, education, development, open access, and innovation in amateur radio, digital communication, and information and communication science and technology. We fulfill that mission in two ways:
  1. We administer an allocation of IP addresses that we call 44Net. These IP addresses (in the 44.0.0.0/8 IP range) can only be used for amateur radio applications and experimentation.
  2. We make grants to organizations whose work aligns with our mission. This includes amateur radio clubs as well as other amateur radio-related organizations and activities. Additionally, we support scholarship programs for people who either have an amateur radio license or are pursuing careers in technology, STEM education, and open-source software development projects that fit our mission, such as Reproducible Builds.

Chris: How might you relate the importance of amateur radio and similar technologies to someone who is non-technical? Dan: Amateur radio is important in a number of ways. First of all, amateur radio is a public service. In fact, the legal name for amateur radio is the Amateur Radio Service, and one of the primary reasons that amateur radio exists is to provide emergency and public service communications. All over the world, amateur radio operators are prepared to step up and provide emergency communications when disaster strikes or to provide communications for events such as marathons or bicycle tours. Second, amateur radio is important because it helps advance the state of the art. By experimenting with different circuits and communications techniques, amateurs have made significant contributions to communications science and technology. Third, amateur radio plays a big part in technical education. It enables students to experiment with wireless technologies and electronics in ways that aren't possible without a license. Amateur radio has historically been a gateway for young people interested in pursuing a career in engineering or science, such as network or electrical engineering. Fourth, and this point is a little less obvious than the first three, amateur radio is a way to enhance international goodwill and community. Radio knows no boundaries, of course, and amateurs are therefore ambassadors for their country, reaching out to all around the world. Beyond amateur radio, ARDC also supports and promotes research and innovation in the broader field of digital communication and information and communication science and technology. Information and communication technology plays a big part in our lives, be it for business, education, or personal communications. For example, think of the impact that cell phones have had on our culture. The challenge is that much of this work is proprietary and owned by large corporations. By focusing on open source work in this area, we help open the door to innovation outside of the corporate landscape, which is important to overall technological resiliency.
Chris: Could you briefly outline the history of ARDC? Dan: Nearly forty years ago, a group of visionary hams saw the future possibilities of what was to become the internet and requested an address allocation from the Internet Assigned Numbers Authority (IANA). That allocation included more than sixteen million IPv4 addresses, 44.0.0.0 through 44.255.255.255. These addresses have been used exclusively for amateur radio applications and experimentation with digital communications techniques ever since. In 2011, the informal group of hams administering these addresses incorporated as a nonprofit corporation, Amateur Radio Digital Communications (ARDC). ARDC is recognized by IANA, ARIN and the other Internet Registries as the sole owner of these addresses, which are also known as AMPRNet or 44Net. Over the years, ARDC has assigned addresses to thousands of hams on a long-term loan (essentially acting as a zero-cost lease), allowing them to experiment with digital communications technology. Using these IP addresses, hams have carried out some very interesting and worthwhile research projects and developed practical applications, including TCP/IP connectivity via radio links, digital voice, telemetry and repeater linking. Even so, the amateur radio community never used much more than half the available addresses, and today, less than one third of the address space is assigned and in use. This is one of the reasons that ARDC, in 2019, decided to sell one quarter of the address space (or approximately 4 million IP addresses) and establish an endowment with the proceeds. This endowment now funds ARDC's suite of grants, including scholarships, research projects, and of course amateur radio projects. Initially, ARDC was restricted to awarding grants to organizations in the United States, but is now able to provide funds to organizations around the world.
Chris: How does the Reproducible Builds effort help ARDC achieve its goals? Dan: Our aspirational goals include: We think that the Reproducible Builds efforts in helping to ensure the safety and security of open source software closely align with those goals.
Chris: Are there any specific success stories that ARDC is particularly proud of? Dan: We are really proud of our grant to the Hoopa Valley Tribe in California. With a population of nearly 2,100, their reservation is the largest in California. Like everywhere else, the COVID-19 pandemic hit the reservation hard, and the lack of broadband internet access meant that 130 children on the reservation were unable to attend school remotely. The ARDC grant allowed the tribe to address the immediate broadband needs in the Hoopa Valley, as well as encourage the use of amateur radio and other two-way communications on the reservation. The first nation was able to deploy a network that provides broadband access to approximately 90% of the residents in the valley. And, in addition to bringing remote education to those 130 children, the Hoopa now use the network for remote medical monitoring and consultation, adult education, and other applications. Other successes include our grants to:
Chris: ARDC supports a number of other existing projects and initiatives, not all of them in the open source world. How much do you feel being a part of the broader free culture movement helps you achieve your aims? Dan: In general, we find it challenging that most digital communications technology is proprietary and closed-source. It's part of our mission to fund open source alternatives. Without them, we are solely reliant, as a society, on corporate interests for our digital communication needs. It makes us vulnerable and it puts us at risk of increased surveillance. Thus, ARDC supports open source software wherever possible, and our grantees must make a commitment to share their work under an open source license or otherwise make it as freely available as possible.
Chris: Thanks so much for taking the time to talk to us today. Now, if someone wanted to know more about ARDC or to get involved, where might they go to look? To learn more about ARDC in general, please visit our website at https://www.ampr.org. To learn more about 44Net, go to https://wiki.ampr.org/wiki/Main_Page. And, finally, to learn more about our grants program, go to https://www.ampr.org/apply/

For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.
